Dec 15 05:37:15 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 15 05:37:15 crc restorecon[4572]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 15 05:37:15 crc restorecon[4572]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc 
restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 15 05:37:15 crc 
restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 15 
05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 15 05:37:15 crc 
restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 15 05:37:15 crc 
restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:15
crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 
05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 15 05:37:15 crc 
restorecon[4572]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc 
restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 15 05:37:15 crc restorecon[4572]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc 
restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:15 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc 
restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 15 05:37:16 crc restorecon[4572]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc 
restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc restorecon[4572]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 15 05:37:16 crc restorecon[4572]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 15 05:37:16 crc restorecon[4572]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 15 05:37:16 crc kubenswrapper[4747]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 15 05:37:16 crc kubenswrapper[4747]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 15 05:37:16 crc kubenswrapper[4747]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 15 05:37:16 crc kubenswrapper[4747]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 15 05:37:16 crc kubenswrapper[4747]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 15 05:37:16 crc kubenswrapper[4747]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.489841 4747 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492093 4747 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492110 4747 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492114 4747 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492119 4747 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492123 4747 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492128 4747 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492133 4747 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492137 4747 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492141 4747 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492144 4747 
feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492148 4747 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492151 4747 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492155 4747 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492166 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492171 4747 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492476 4747 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492490 4747 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492495 4747 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492500 4747 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492504 4747 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492509 4747 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492513 4747 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492518 4747 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492521 4747 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492525 4747 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492528 4747 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492531 4747 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492535 4747 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492538 4747 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492542 4747 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492545 4747 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492548 4747 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492552 4747 
feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492555 4747 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492558 4747 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492561 4747 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492564 4747 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492568 4747 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492571 4747 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492575 4747 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492578 4747 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492581 4747 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492585 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492588 4747 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492593 4747 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492596 4747 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492600 4747 feature_gate.go:330] unrecognized feature gate: 
NutanixMultiSubnets Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492603 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492606 4747 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492609 4747 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492612 4747 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492616 4747 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492620 4747 feature_gate.go:330] unrecognized feature gate: Example Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492625 4747 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492629 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492634 4747 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492639 4747 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492645 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492650 4747 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492655 4747 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492659 4747 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492663 4747 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492667 4747 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492672 4747 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492676 4747 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492681 4747 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492685 4747 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492689 4747 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492693 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492697 4747 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.492701 4747 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493252 4747 flags.go:64] FLAG: --address="0.0.0.0" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493266 4747 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493274 4747 flags.go:64] FLAG: --anonymous-auth="true" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493280 4747 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493286 4747 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493290 4747 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493295 4747 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493300 4747 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493304 
4747 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493308 4747 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493313 4747 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493317 4747 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493322 4747 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493326 4747 flags.go:64] FLAG: --cgroup-root="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493330 4747 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493334 4747 flags.go:64] FLAG: --client-ca-file="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493338 4747 flags.go:64] FLAG: --cloud-config="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493342 4747 flags.go:64] FLAG: --cloud-provider="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493346 4747 flags.go:64] FLAG: --cluster-dns="[]" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493353 4747 flags.go:64] FLAG: --cluster-domain="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493358 4747 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493364 4747 flags.go:64] FLAG: --config-dir="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493368 4747 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493373 4747 flags.go:64] FLAG: --container-log-max-files="5" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493378 4747 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 
05:37:16.493382 4747 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493388 4747 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493393 4747 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493399 4747 flags.go:64] FLAG: --contention-profiling="false" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493403 4747 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493409 4747 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493414 4747 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493418 4747 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493424 4747 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493428 4747 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493432 4747 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493437 4747 flags.go:64] FLAG: --enable-load-reader="false" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493441 4747 flags.go:64] FLAG: --enable-server="true" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493446 4747 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493452 4747 flags.go:64] FLAG: --event-burst="100" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493456 4747 flags.go:64] FLAG: --event-qps="50" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493460 4747 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 15 
05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493466 4747 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493470 4747 flags.go:64] FLAG: --eviction-hard="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493476 4747 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493479 4747 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493483 4747 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493488 4747 flags.go:64] FLAG: --eviction-soft="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493492 4747 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493496 4747 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493500 4747 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493505 4747 flags.go:64] FLAG: --experimental-mounter-path="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493509 4747 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493513 4747 flags.go:64] FLAG: --fail-swap-on="true" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493517 4747 flags.go:64] FLAG: --feature-gates="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493522 4747 flags.go:64] FLAG: --file-check-frequency="20s" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493526 4747 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493530 4747 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493534 4747 flags.go:64] FLAG: 
--healthz-bind-address="127.0.0.1" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493538 4747 flags.go:64] FLAG: --healthz-port="10248" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493542 4747 flags.go:64] FLAG: --help="false" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493546 4747 flags.go:64] FLAG: --hostname-override="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493550 4747 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493554 4747 flags.go:64] FLAG: --http-check-frequency="20s" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493558 4747 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493562 4747 flags.go:64] FLAG: --image-credential-provider-config="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493566 4747 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493570 4747 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493573 4747 flags.go:64] FLAG: --image-service-endpoint="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493577 4747 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493581 4747 flags.go:64] FLAG: --kube-api-burst="100" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493585 4747 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493589 4747 flags.go:64] FLAG: --kube-api-qps="50" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493593 4747 flags.go:64] FLAG: --kube-reserved="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493597 4747 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493601 4747 flags.go:64] FLAG: 
--kubeconfig="/var/lib/kubelet/kubeconfig" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493604 4747 flags.go:64] FLAG: --kubelet-cgroups="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493608 4747 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493612 4747 flags.go:64] FLAG: --lock-file="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493616 4747 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493620 4747 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493623 4747 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493629 4747 flags.go:64] FLAG: --log-json-split-stream="false" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493633 4747 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493636 4747 flags.go:64] FLAG: --log-text-split-stream="false" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493640 4747 flags.go:64] FLAG: --logging-format="text" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493644 4747 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493648 4747 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493651 4747 flags.go:64] FLAG: --manifest-url="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493655 4747 flags.go:64] FLAG: --manifest-url-header="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493659 4747 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493663 4747 flags.go:64] FLAG: --max-open-files="1000000" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493667 4747 
flags.go:64] FLAG: --max-pods="110" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493671 4747 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493674 4747 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493678 4747 flags.go:64] FLAG: --memory-manager-policy="None" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493682 4747 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493686 4747 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493690 4747 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493694 4747 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493703 4747 flags.go:64] FLAG: --node-status-max-images="50" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493707 4747 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493711 4747 flags.go:64] FLAG: --oom-score-adj="-999" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493715 4747 flags.go:64] FLAG: --pod-cidr="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493719 4747 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493726 4747 flags.go:64] FLAG: --pod-manifest-path="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493730 4747 flags.go:64] FLAG: --pod-max-pids="-1" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493734 4747 flags.go:64] FLAG: --pods-per-core="0" Dec 15 05:37:16 
crc kubenswrapper[4747]: I1215 05:37:16.493737 4747 flags.go:64] FLAG: --port="10250" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493741 4747 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493744 4747 flags.go:64] FLAG: --provider-id="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493751 4747 flags.go:64] FLAG: --qos-reserved="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493755 4747 flags.go:64] FLAG: --read-only-port="10255" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493759 4747 flags.go:64] FLAG: --register-node="true" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493762 4747 flags.go:64] FLAG: --register-schedulable="true" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493766 4747 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493771 4747 flags.go:64] FLAG: --registry-burst="10" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493775 4747 flags.go:64] FLAG: --registry-qps="5" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493779 4747 flags.go:64] FLAG: --reserved-cpus="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493782 4747 flags.go:64] FLAG: --reserved-memory="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493787 4747 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493790 4747 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493795 4747 flags.go:64] FLAG: --rotate-certificates="false" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493799 4747 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493803 4747 flags.go:64] FLAG: --runonce="false" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493807 4747 
flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493813 4747 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493818 4747 flags.go:64] FLAG: --seccomp-default="false" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493822 4747 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493826 4747 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493832 4747 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493835 4747 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493840 4747 flags.go:64] FLAG: --storage-driver-password="root" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493843 4747 flags.go:64] FLAG: --storage-driver-secure="false" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493846 4747 flags.go:64] FLAG: --storage-driver-table="stats" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493850 4747 flags.go:64] FLAG: --storage-driver-user="root" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493854 4747 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493858 4747 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493863 4747 flags.go:64] FLAG: --system-cgroups="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493866 4747 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493872 4747 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493875 4747 flags.go:64] FLAG: --tls-cert-file="" Dec 15 
05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493879 4747 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493885 4747 flags.go:64] FLAG: --tls-min-version="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493889 4747 flags.go:64] FLAG: --tls-private-key-file="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493892 4747 flags.go:64] FLAG: --topology-manager-policy="none" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493896 4747 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493899 4747 flags.go:64] FLAG: --topology-manager-scope="container" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493903 4747 flags.go:64] FLAG: --v="2" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493907 4747 flags.go:64] FLAG: --version="false" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493912 4747 flags.go:64] FLAG: --vmodule="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493916 4747 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.493934 4747 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494030 4747 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494044 4747 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494048 4747 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494051 4747 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494055 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 15 05:37:16 crc kubenswrapper[4747]: 
W1215 05:37:16.494058 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494061 4747 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494064 4747 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494067 4747 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494071 4747 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494074 4747 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494078 4747 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494081 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494084 4747 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494088 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494091 4747 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494095 4747 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494098 4747 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494101 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494104 4747 
feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494107 4747 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494111 4747 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494115 4747 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494118 4747 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494123 4747 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494127 4747 feature_gate.go:330] unrecognized feature gate: Example Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494130 4747 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494134 4747 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494138 4747 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494141 4747 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494144 4747 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494147 4747 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494151 4747 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494154 4747 feature_gate.go:330] unrecognized feature gate: 
InsightsConfigAPI Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494158 4747 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494161 4747 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494166 4747 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494170 4747 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494174 4747 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494177 4747 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494180 4747 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494183 4747 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494187 4747 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494190 4747 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494193 4747 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494196 4747 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494199 4747 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494203 4747 feature_gate.go:330] unrecognized feature 
gate: MultiArchInstallAzure Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494206 4747 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494209 4747 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494212 4747 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494215 4747 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494218 4747 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494221 4747 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494226 4747 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494230 4747 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494234 4747 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494238 4747 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494242 4747 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494246 4747 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494249 4747 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494252 4747 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494255 4747 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494258 4747 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494261 4747 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494264 4747 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494267 4747 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494270 4747 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494273 4747 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494277 4747 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.494280 4747 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.494903 4747 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.502598 4747 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.502624 4747 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503551 4747 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503590 4747 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503596 4747 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503601 4747 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503606 4747 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503614 4747 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503624 4747 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503628 4747 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503634 4747 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503638 4747 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503643 4747 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503647 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503652 4747 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503656 4747 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503660 4747 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503666 4747 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503672 4747 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503677 4747 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503681 4747 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503686 4747 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503690 4747 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503694 4747 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503698 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503702 4747 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503706 4747 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503709 4747 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503714 4747 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503718 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503722 4747 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503726 4747 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503730 4747 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503736 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503744 4747 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503751 4747 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503757 4747 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503763 4747 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503769 4747 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503775 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503779 4747 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503783 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503786 4747 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503790 4747 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503794 4747 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503798 4747 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503802 4747 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503805 4747 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503809 4747 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503813 4747 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503817 4747 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503822 4747 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503826 4747 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503829 4747 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503833 4747 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503837 4747 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503841 4747 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503846 4747 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503850 4747 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503854 4747 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503859 4747 feature_gate.go:330] unrecognized feature gate: Example
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503863 4747 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503868 4747 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503873 4747 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503877 4747 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503880 4747 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503884 4747 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503888 4747 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503892 4747 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503898 4747 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503902 4747 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503905 4747 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.503909 4747 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.503917 4747 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504349 4747 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504358 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504363 4747 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504367 4747 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504371 4747 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504375 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504380 4747 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504385 4747 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504390 4747 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504393 4747 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504397 4747 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504401 4747 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504406 4747 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504410 4747 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504414 4747 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504419 4747 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504423 4747 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504427 4747 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504430 4747 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504434 4747 feature_gate.go:330] unrecognized feature gate: Example
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504438 4747 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504441 4747 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504445 4747 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504449 4747 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504453 4747 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504457 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504460 4747 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504464 4747 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504467 4747 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504471 4747 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504474 4747 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504478 4747 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504483 4747 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504487 4747 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504491 4747 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504495 4747 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504498 4747 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504502 4747 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504506 4747 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504510 4747 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504513 4747 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504517 4747 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504520 4747 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504524 4747 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504527 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504530 4747 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504534 4747 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504537 4747 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504543 4747 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504547 4747 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504550 4747 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504553 4747 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504557 4747 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504561 4747 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504565 4747 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504569 4747 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504572 4747 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504576 4747 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504580 4747 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504584 4747 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504588 4747 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504591 4747 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504594 4747 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504598 4747 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504601 4747 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504604 4747 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504607 4747 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504610 4747 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504614 4747 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504618 4747 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.504622 4747 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.504626 4747 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.504815 4747 server.go:940] "Client rotation is on, will bootstrap in background"
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.507528 4747 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.507630 4747 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.509645 4747 server.go:997] "Starting client certificate rotation"
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.509676 4747 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.509868 4747 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-22 02:25:58.47321037 +0000 UTC
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.509980 4747 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.521691 4747 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.523147 4747 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 15 05:37:16 crc kubenswrapper[4747]: E1215 05:37:16.524676 4747 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.25.116:6443: connect: connection refused" logger="UnhandledError"
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.532768 4747 log.go:25] "Validated CRI v1 runtime API"
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.549773 4747 log.go:25] "Validated CRI v1 image API"
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.551070 4747 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.556473 4747 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-15-05-34-08-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.556498 4747 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:49 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm:{mountpoint:/var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:50 fsType:tmpfs blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/94b752e0a51c0134b00ddef6dc7a933a9d7c1d9bdc88a18dae4192a0d557d623/merged major:0 minor:43 fsType:overlay blockSize:0}]
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.570463 4747 manager.go:217] Machine: {Timestamp:2025-12-15 05:37:16.569001529 +0000 UTC m=+0.265513467 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2445404 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:f6e6c4c5-517c-43b9-abbe-241c399d7f32 BootID:4ef03cda-5cb6-4966-bf0f-23d213ae8ebc Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:65536000 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:49 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:50 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:00:a6:d8 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:enp3s0 MacAddress:fa:16:3e:00:a6:d8 Speed:-1 Mtu:1500} {Name:enp7s0 MacAddress:fa:16:3e:92:63:10 Speed:-1 Mtu:1440} {Name:enp7s0.20 MacAddress:52:54:00:21:2b:7b Speed:-1 Mtu:1436} {Name:enp7s0.21 MacAddress:52:54:00:bf:21:2e Speed:-1 Mtu:1436} {Name:enp7s0.22 MacAddress:52:54:00:23:d7:fa Speed:-1 Mtu:1436} {Name:eth10 MacAddress:ee:3f:20:25:30:36 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:16:b3:c6:10:f8:0d Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:65536 Type:Data Level:1} {Id:0 Size:65536 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:65536 Type:Data Level:1} {Id:1 Size:65536 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:65536 Type:Data Level:1} {Id:10 Size:65536 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:65536 Type:Data Level:1} {Id:11 Size:65536 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:65536 Type:Data Level:1} {Id:2 Size:65536 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:65536 Type:Data Level:1} {Id:3 Size:65536 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:65536 Type:Data Level:1} {Id:4 Size:65536 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:65536 Type:Data Level:1} {Id:5 Size:65536 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:65536 Type:Data Level:1} {Id:6 Size:65536 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:65536 Type:Data Level:1} {Id:7 Size:65536 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:65536 Type:Data Level:1} {Id:8 Size:65536 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:65536 Type:Data Level:1} {Id:9 Size:65536 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.570656 4747 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.570846 4747 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.571597 4747 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.571767 4747 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.571797 4747 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.571996 4747 topology_manager.go:138] "Creating topology manager with none policy"
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.572009 4747 container_manager_linux.go:303] "Creating device plugin manager"
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.572320 4747 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.572353 4747 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.572448 4747 state_mem.go:36] "Initialized new in-memory state store"
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.572519 4747 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.574234 4747 kubelet.go:418] "Attempting to sync node with API server"
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.574257 4747 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.574278 4747 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.574291 4747 kubelet.go:324] "Adding apiserver pod source"
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.574306 4747 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.577261 4747 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.577373 4747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.25.116:6443: connect: connection refused
Dec 15 05:37:16 crc kubenswrapper[4747]: E1215 05:37:16.577461 4747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 192.168.25.116:6443: connect: connection refused" logger="UnhandledError" Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.577548 4747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.25.116:6443: connect: connection refused Dec 15 05:37:16 crc kubenswrapper[4747]: E1215 05:37:16.577615 4747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.25.116:6443: connect: connection refused" logger="UnhandledError" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.577996 4747 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.579235 4747 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.580154 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.580182 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.580190 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.580199 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.580210 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.580218 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.580226 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.580237 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.580247 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.580256 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.580266 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.580273 4747 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.580919 4747 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.581366 4747 server.go:1280] "Started kubelet" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.582083 4747 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.582088 4747 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 15 05:37:16 crc systemd[1]: Started Kubernetes Kubelet. Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.582572 4747 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 192.168.25.116:6443: connect: connection refused Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.582739 4747 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.584028 4747 server.go:460] "Adding debug handlers to kubelet server" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.584814 4747 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.584989 4747 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.585176 4747 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 12:58:56.947341256 +0000 UTC Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.588123 4747 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.588206 4747 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.588256 4747 
desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 15 05:37:16 crc kubenswrapper[4747]: E1215 05:37:16.588224 4747 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.588750 4747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.25.116:6443: connect: connection refused Dec 15 05:37:16 crc kubenswrapper[4747]: E1215 05:37:16.588820 4747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.25.116:6443: connect: connection refused" logger="UnhandledError" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.589524 4747 factory.go:55] Registering systemd factory Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.589609 4747 factory.go:221] Registration of the systemd container factory successfully Dec 15 05:37:16 crc kubenswrapper[4747]: E1215 05:37:16.589860 4747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.116:6443: connect: connection refused" interval="200ms" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.590097 4747 factory.go:153] Registering CRI-O factory Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.590158 4747 factory.go:221] Registration of the crio container factory successfully Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.590259 4747 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: 
cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.590406 4747 factory.go:103] Registering Raw factory Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.590454 4747 manager.go:1196] Started watching for new ooms in manager Dec 15 05:37:16 crc kubenswrapper[4747]: E1215 05:37:16.589688 4747 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 192.168.25.116:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18814cdbf9f8ff37 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-15 05:37:16.581338935 +0000 UTC m=+0.277850852,LastTimestamp:2025-12-15 05:37:16.581338935 +0000 UTC m=+0.277850852,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.592695 4747 manager.go:319] Starting recovery of all containers Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596152 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596199 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 15 05:37:16 
crc kubenswrapper[4747]: I1215 05:37:16.596212 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596223 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596234 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596243 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596253 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596264 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596278 4747 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596291 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596301 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596311 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596321 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596335 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596345 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596356 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596367 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596376 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596388 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596398 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596415 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" 
seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596427 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596438 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596448 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596473 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596486 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596499 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 15 05:37:16 crc 
kubenswrapper[4747]: I1215 05:37:16.596524 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596534 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596546 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596557 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596569 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596584 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596596 4747 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596607 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596624 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596637 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596651 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596665 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596682 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596693 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596704 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596715 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596727 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596738 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596765 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596776 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596788 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596801 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596812 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596821 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596833 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596847 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596858 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596870 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596879 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596892 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596902 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596911 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596939 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596962 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596972 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596982 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.596992 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597004 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597015 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597025 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597048 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597059 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597069 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597081 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597090 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597101 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597111 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597124 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597135 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" 
seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597146 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597157 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597167 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597177 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597189 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597199 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597209 4747 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597221 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597232 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597245 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597256 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597267 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597278 4747 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597288 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597298 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597308 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597319 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597329 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597340 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597350 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597360 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597370 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597381 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597392 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597404 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597413 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597423 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597433 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597449 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597461 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597477 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597488 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597500 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597511 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597523 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597537 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597550 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597561 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597575 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597586 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597597 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597608 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597620 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597634 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597646 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597658 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597669 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597680 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597690 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" 
seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597704 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597715 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597726 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597738 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597750 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597760 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597770 
4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597781 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597791 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597803 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597818 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597829 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597840 4747 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597855 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597866 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597878 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597889 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597899 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597911 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597937 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597949 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597960 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597971 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.597983 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599336 4747 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599360 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599374 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599385 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599395 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599408 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599419 4747 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599430 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599440 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599451 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599463 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599477 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599487 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599500 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599511 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599524 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599535 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599545 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599555 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599566 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599576 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599587 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599597 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599609 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599620 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599631 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599646 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599656 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599698 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599711 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599724 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599735 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599749 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599761 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599776 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599787 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599804 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 
15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599814 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599826 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599836 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599849 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599860 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599871 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599884 4747 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599896 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599908 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599920 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599949 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599961 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599973 4747 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599982 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.599992 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.600002 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.600012 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.600022 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.600032 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.600054 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.600065 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.600076 4747 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.600088 4747 reconstruct.go:97] "Volume reconstruction finished" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.600096 4747 reconciler.go:26] "Reconciler: start to sync state" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.609164 4747 manager.go:324] Recovery completed Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.619383 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.620967 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.621021 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 
05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.621042 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.622278 4747 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.622301 4747 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.622324 4747 state_mem.go:36] "Initialized new in-memory state store" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.626398 4747 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.627007 4747 policy_none.go:49] "None policy: Start" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.627818 4747 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.627846 4747 state_mem.go:35] "Initializing new in-memory state store" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.627956 4747 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.627990 4747 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.628014 4747 kubelet.go:2335] "Starting kubelet main sync loop" Dec 15 05:37:16 crc kubenswrapper[4747]: E1215 05:37:16.628069 4747 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 15 05:37:16 crc kubenswrapper[4747]: W1215 05:37:16.629860 4747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.25.116:6443: connect: connection refused Dec 15 05:37:16 crc kubenswrapper[4747]: E1215 05:37:16.631371 4747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.25.116:6443: connect: connection refused" logger="UnhandledError" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.684758 4747 manager.go:334] "Starting Device Plugin manager" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.684811 4747 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.684827 4747 server.go:79] "Starting device plugin registration server" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.685213 4747 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.685239 4747 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 15 05:37:16 crc 
kubenswrapper[4747]: I1215 05:37:16.685505 4747 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.685605 4747 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.685620 4747 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 15 05:37:16 crc kubenswrapper[4747]: E1215 05:37:16.692468 4747 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.728533 4747 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.728632 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.729510 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.729559 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.729574 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.729782 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.730015 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.730079 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.730767 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.730825 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.730848 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.730858 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.730829 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.730877 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.731150 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.731228 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.731266 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.732069 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.732098 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.732136 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.732148 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.732103 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.732193 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.732390 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.732533 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.732568 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.733193 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.733228 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.733260 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.733367 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.733393 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.733406 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.733395 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.733493 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.733528 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.733993 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.734014 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.734024 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.734150 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.734181 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.734198 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.734210 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.734200 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.734846 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.734867 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.734875 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.785581 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.786255 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.786291 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.786304 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.786330 4747 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 15 05:37:16 crc kubenswrapper[4747]: E1215 05:37:16.786795 4747 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.25.116:6443: connect: connection refused" node="crc" Dec 15 05:37:16 crc kubenswrapper[4747]: E1215 05:37:16.791069 4747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.116:6443: connect: connection refused" interval="400ms" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.801215 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 
05:37:16.801261 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.801287 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.801306 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.801328 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.801350 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.801389 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.801417 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.801440 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.801460 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.801485 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.801505 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.801535 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.801551 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.801593 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.902819 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.902855 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.902873 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.902913 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.902980 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903043 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903027 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" 
(UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903002 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903102 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903140 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903181 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903204 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903237 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903206 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903306 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903362 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903386 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903399 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") 
pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903423 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903437 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903437 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903444 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903474 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903461 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903503 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903529 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903558 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903598 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.903606 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 
05:37:16.903705 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.987182 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.988027 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.988073 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.988086 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:16 crc kubenswrapper[4747]: I1215 05:37:16.988111 4747 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 15 05:37:16 crc kubenswrapper[4747]: E1215 05:37:16.988496 4747 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.25.116:6443: connect: connection refused" node="crc" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.070795 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.075579 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 15 05:37:17 crc kubenswrapper[4747]: W1215 05:37:17.094256 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-c13b3819ccf48a01497609bb04bcb650c79d449fd2277b4ffbb9731b1ce21198 WatchSource:0}: Error finding container c13b3819ccf48a01497609bb04bcb650c79d449fd2277b4ffbb9731b1ce21198: Status 404 returned error can't find the container with id c13b3819ccf48a01497609bb04bcb650c79d449fd2277b4ffbb9731b1ce21198 Dec 15 05:37:17 crc kubenswrapper[4747]: W1215 05:37:17.096318 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-2b58b93ff63e1ed176e7006f47641febcb7ca5acdb36dcd39c3169ebee2567af WatchSource:0}: Error finding container 2b58b93ff63e1ed176e7006f47641febcb7ca5acdb36dcd39c3169ebee2567af: Status 404 returned error can't find the container with id 2b58b93ff63e1ed176e7006f47641febcb7ca5acdb36dcd39c3169ebee2567af Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.097625 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:37:17 crc kubenswrapper[4747]: W1215 05:37:17.111126 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-181f2066b6934512d61b655381e5ab5536e822013d12f5c1b95c60a0c8070cc3 WatchSource:0}: Error finding container 181f2066b6934512d61b655381e5ab5536e822013d12f5c1b95c60a0c8070cc3: Status 404 returned error can't find the container with id 181f2066b6934512d61b655381e5ab5536e822013d12f5c1b95c60a0c8070cc3 Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.115816 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.121174 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 15 05:37:17 crc kubenswrapper[4747]: W1215 05:37:17.128977 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-b3d8e60e5ed60e13fed92624cc17f9d9d0421c524a0e2326da600fed4354301d WatchSource:0}: Error finding container b3d8e60e5ed60e13fed92624cc17f9d9d0421c524a0e2326da600fed4354301d: Status 404 returned error can't find the container with id b3d8e60e5ed60e13fed92624cc17f9d9d0421c524a0e2326da600fed4354301d Dec 15 05:37:17 crc kubenswrapper[4747]: W1215 05:37:17.134259 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-bd6637b974417763046fbd9ae46bf49352e5f317384ce96668a0042ed51f6400 WatchSource:0}: Error finding container bd6637b974417763046fbd9ae46bf49352e5f317384ce96668a0042ed51f6400: Status 404 returned error can't find the container with id bd6637b974417763046fbd9ae46bf49352e5f317384ce96668a0042ed51f6400 Dec 15 05:37:17 crc kubenswrapper[4747]: E1215 05:37:17.192096 4747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.116:6443: connect: connection refused" interval="800ms" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.388909 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.390490 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.390719 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.390730 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.390761 4747 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 15 05:37:17 crc kubenswrapper[4747]: E1215 05:37:17.391181 4747 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.25.116:6443: connect: connection refused" node="crc" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.584510 4747 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 192.168.25.116:6443: connect: connection refused Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.585482 4747 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 15:43:45.377997182 +0000 UTC Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.637209 4747 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54" exitCode=0 Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.637313 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54"} Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.637436 4747 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bd6637b974417763046fbd9ae46bf49352e5f317384ce96668a0042ed51f6400"} Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.637544 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.638651 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.638685 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.638695 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.640834 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd"} Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.640904 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b3d8e60e5ed60e13fed92624cc17f9d9d0421c524a0e2326da600fed4354301d"} Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.642634 4747 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a" exitCode=0 Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.642706 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a"} Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.642745 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"181f2066b6934512d61b655381e5ab5536e822013d12f5c1b95c60a0c8070cc3"} Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.643004 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.644074 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.644119 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.644135 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.644687 4747 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d611c6747452e170832a06a690656905b3cba3c778efaafc06fb1ac664b3e9e4" exitCode=0 Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.644779 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d611c6747452e170832a06a690656905b3cba3c778efaafc06fb1ac664b3e9e4"} Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.644846 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c13b3819ccf48a01497609bb04bcb650c79d449fd2277b4ffbb9731b1ce21198"} Dec 15 05:37:17 
crc kubenswrapper[4747]: I1215 05:37:17.645064 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.645983 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.646017 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.646039 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.646306 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.646940 4747 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="ec4f7d4028bf6b15095d8a52e3a5a0faf94db3b63c29f93d93a380bebb963c49" exitCode=0 Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.646982 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"ec4f7d4028bf6b15095d8a52e3a5a0faf94db3b63c29f93d93a380bebb963c49"} Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.647004 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2b58b93ff63e1ed176e7006f47641febcb7ca5acdb36dcd39c3169ebee2567af"} Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.647091 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.647146 4747 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.647185 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.647195 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.648378 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.648759 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:17 crc kubenswrapper[4747]: I1215 05:37:17.648785 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:17 crc kubenswrapper[4747]: W1215 05:37:17.669611 4747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.25.116:6443: connect: connection refused Dec 15 05:37:17 crc kubenswrapper[4747]: E1215 05:37:17.669702 4747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.25.116:6443: connect: connection refused" logger="UnhandledError" Dec 15 05:37:17 crc kubenswrapper[4747]: W1215 05:37:17.739326 4747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.25.116:6443: connect: connection refused Dec 15 05:37:17 
crc kubenswrapper[4747]: E1215 05:37:17.739451 4747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 192.168.25.116:6443: connect: connection refused" logger="UnhandledError" Dec 15 05:37:17 crc kubenswrapper[4747]: E1215 05:37:17.993795 4747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.116:6443: connect: connection refused" interval="1.6s" Dec 15 05:37:18 crc kubenswrapper[4747]: W1215 05:37:18.143396 4747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.25.116:6443: connect: connection refused Dec 15 05:37:18 crc kubenswrapper[4747]: E1215 05:37:18.143506 4747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.25.116:6443: connect: connection refused" logger="UnhandledError" Dec 15 05:37:18 crc kubenswrapper[4747]: W1215 05:37:18.158083 4747 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.25.116:6443: connect: connection refused Dec 15 05:37:18 crc kubenswrapper[4747]: E1215 05:37:18.158180 4747 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.25.116:6443: connect: connection refused" logger="UnhandledError" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.191706 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.193284 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.193336 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.193350 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.193382 4747 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 15 05:37:18 crc kubenswrapper[4747]: E1215 05:37:18.193964 4747 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.25.116:6443: connect: connection refused" node="crc" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.551758 4747 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.585881 4747 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 11:27:14.13812727 +0000 UTC Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.651245 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a"} Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.651300 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6"} Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.651312 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5"} Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.651433 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.652228 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.652255 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.652264 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.655277 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578"} Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.655555 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0"} Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.655624 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16"} Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.655642 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f"} Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.655660 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4"} Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.656501 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.658879 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.658913 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.658946 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.662058 4747 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="bae1eed53c4cf523187d589efd6c88e5e7434da8520f2e8f947ac0ded1a79a8a" exitCode=0 Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.662111 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bae1eed53c4cf523187d589efd6c88e5e7434da8520f2e8f947ac0ded1a79a8a"} Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.662215 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.662744 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.662772 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.662782 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.664190 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"5c45c82ced788dd70d55b8e7aa52c86b1345a6b23f4b1869d80b42b32a0cb7db"} Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.664343 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.665174 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.665200 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.665211 4747 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.666172 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"47ff0666091801d67feef4ab5998d6a9c037afa1781db60c2f67046f3ec99a4d"} Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.666218 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"918387852c8b6a10cbef90523b68f21472cb57394fe3107fb6a96ac8e76ada07"} Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.666230 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"61d682a9462fba61e03c438d541888778564c5f9614b20ae3415d06039a1b422"} Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.666330 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.667040 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.667074 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:18 crc kubenswrapper[4747]: I1215 05:37:18.667087 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.055244 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 15 05:37:19 crc 
kubenswrapper[4747]: I1215 05:37:19.586793 4747 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 19:21:43.983326061 +0000 UTC Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.586861 4747 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 301h44m24.396467424s for next certificate rotation Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.609138 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.671331 4747 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="05b4ede1deff8cce04af8da966d035a333c9ba1f6b1800f18aa2a56e5e9c3ca7" exitCode=0 Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.671427 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"05b4ede1deff8cce04af8da966d035a333c9ba1f6b1800f18aa2a56e5e9c3ca7"} Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.671480 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.671547 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.671593 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.671895 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.672712 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 
05:37:19.672751 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.672766 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.672841 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.672861 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.672871 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.673053 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.673077 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.673088 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.794312 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.795350 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.795381 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.795392 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 15 05:37:19 crc kubenswrapper[4747]: I1215 05:37:19.795414 4747 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 15 05:37:20 crc kubenswrapper[4747]: I1215 05:37:20.678564 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6b9c4bc5a44c96d2498bdaa9c1ca462f39dcdb89cd8df40e47f31fc96d685562"} Dec 15 05:37:20 crc kubenswrapper[4747]: I1215 05:37:20.678632 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a11b192db156f8fcdbc1060b83350d80b7a1a33dbd35a0af08019384e4b2574a"} Dec 15 05:37:20 crc kubenswrapper[4747]: I1215 05:37:20.678649 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c291f4c04f2bba5f7e84accfd5d45951ea6104fb615e8df30e2f47e0514cc268"} Dec 15 05:37:20 crc kubenswrapper[4747]: I1215 05:37:20.678663 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8bbbc86cc03ec6d63e721f27569a456fded46b9f2ddc4f808843f153b2ba9b5d"} Dec 15 05:37:20 crc kubenswrapper[4747]: I1215 05:37:20.678675 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a646158b15b786a1b3196337a7f8e9b60eb779ff6d1fbe6621a8210f834271b8"} Dec 15 05:37:20 crc kubenswrapper[4747]: I1215 05:37:20.678593 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:20 crc kubenswrapper[4747]: I1215 05:37:20.678887 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:20 crc 
kubenswrapper[4747]: I1215 05:37:20.682845 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:20 crc kubenswrapper[4747]: I1215 05:37:20.682950 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:20 crc kubenswrapper[4747]: I1215 05:37:20.682967 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:20 crc kubenswrapper[4747]: I1215 05:37:20.683086 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:20 crc kubenswrapper[4747]: I1215 05:37:20.683121 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:20 crc kubenswrapper[4747]: I1215 05:37:20.683137 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:22 crc kubenswrapper[4747]: I1215 05:37:22.055679 4747 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 15 05:37:22 crc kubenswrapper[4747]: I1215 05:37:22.056350 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 15 05:37:22 crc kubenswrapper[4747]: I1215 05:37:22.964209 4747 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 15 05:37:22 crc kubenswrapper[4747]: I1215 05:37:22.964384 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:22 crc kubenswrapper[4747]: I1215 05:37:22.965542 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:22 crc kubenswrapper[4747]: I1215 05:37:22.965601 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:22 crc kubenswrapper[4747]: I1215 05:37:22.965612 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:23 crc kubenswrapper[4747]: I1215 05:37:23.062977 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:37:23 crc kubenswrapper[4747]: I1215 05:37:23.063113 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 15 05:37:23 crc kubenswrapper[4747]: I1215 05:37:23.063145 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:23 crc kubenswrapper[4747]: I1215 05:37:23.063887 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:23 crc kubenswrapper[4747]: I1215 05:37:23.063921 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:23 crc kubenswrapper[4747]: I1215 05:37:23.063957 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:24 crc kubenswrapper[4747]: I1215 05:37:24.216182 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 15 05:37:24 crc kubenswrapper[4747]: I1215 
05:37:24.216404 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:24 crc kubenswrapper[4747]: I1215 05:37:24.217742 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:24 crc kubenswrapper[4747]: I1215 05:37:24.217782 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:24 crc kubenswrapper[4747]: I1215 05:37:24.217802 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:25 crc kubenswrapper[4747]: I1215 05:37:25.568035 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:37:25 crc kubenswrapper[4747]: I1215 05:37:25.568191 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:25 crc kubenswrapper[4747]: I1215 05:37:25.569513 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:25 crc kubenswrapper[4747]: I1215 05:37:25.569558 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:25 crc kubenswrapper[4747]: I1215 05:37:25.569567 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:26 crc kubenswrapper[4747]: I1215 05:37:26.260106 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 15 05:37:26 crc kubenswrapper[4747]: I1215 05:37:26.260262 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:26 crc kubenswrapper[4747]: I1215 05:37:26.261206 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 15 05:37:26 crc kubenswrapper[4747]: I1215 05:37:26.261255 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:26 crc kubenswrapper[4747]: I1215 05:37:26.261271 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:26 crc kubenswrapper[4747]: I1215 05:37:26.617843 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 15 05:37:26 crc kubenswrapper[4747]: I1215 05:37:26.618035 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:26 crc kubenswrapper[4747]: I1215 05:37:26.619323 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:26 crc kubenswrapper[4747]: I1215 05:37:26.619351 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:26 crc kubenswrapper[4747]: I1215 05:37:26.619362 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:26 crc kubenswrapper[4747]: I1215 05:37:26.622224 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 15 05:37:26 crc kubenswrapper[4747]: I1215 05:37:26.692584 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:26 crc kubenswrapper[4747]: E1215 05:37:26.692587 4747 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 15 05:37:26 crc kubenswrapper[4747]: I1215 05:37:26.693392 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 15 05:37:26 crc kubenswrapper[4747]: I1215 05:37:26.693475 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:26 crc kubenswrapper[4747]: I1215 05:37:26.693486 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:28 crc kubenswrapper[4747]: I1215 05:37:28.458618 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 15 05:37:28 crc kubenswrapper[4747]: I1215 05:37:28.458839 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:28 crc kubenswrapper[4747]: I1215 05:37:28.460219 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:28 crc kubenswrapper[4747]: I1215 05:37:28.460270 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:28 crc kubenswrapper[4747]: I1215 05:37:28.460284 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:28 crc kubenswrapper[4747]: I1215 05:37:28.466126 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 15 05:37:28 crc kubenswrapper[4747]: E1215 05:37:28.553973 4747 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 15 05:37:28 crc kubenswrapper[4747]: I1215 05:37:28.585754 4747 csi_plugin.go:884] Failed to contact 
API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 15 05:37:28 crc kubenswrapper[4747]: I1215 05:37:28.635543 4747 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 15 05:37:28 crc kubenswrapper[4747]: I1215 05:37:28.635612 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 15 05:37:28 crc kubenswrapper[4747]: I1215 05:37:28.639974 4747 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 15 05:37:28 crc kubenswrapper[4747]: I1215 05:37:28.640039 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 15 05:37:28 crc kubenswrapper[4747]: I1215 05:37:28.696494 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:28 crc kubenswrapper[4747]: I1215 05:37:28.697250 4747 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:28 crc kubenswrapper[4747]: I1215 05:37:28.697294 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:28 crc kubenswrapper[4747]: I1215 05:37:28.697306 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:29 crc kubenswrapper[4747]: I1215 05:37:29.616656 4747 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]log ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]etcd ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/generic-apiserver-start-informers ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/priority-and-fairness-filter ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/start-apiextensions-informers ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/start-apiextensions-controllers ok Dec 15 05:37:29 crc kubenswrapper[4747]: 
[+]poststarthook/crd-informer-synced ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/start-system-namespaces-controller ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 15 05:37:29 crc kubenswrapper[4747]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 15 05:37:29 crc kubenswrapper[4747]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/bootstrap-controller ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/start-kube-aggregator-informers ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/apiservice-registration-controller ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/apiservice-discovery-controller ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]autoregister-completion ok Dec 15 05:37:29 crc kubenswrapper[4747]: 
[+]poststarthook/apiservice-openapi-controller ok Dec 15 05:37:29 crc kubenswrapper[4747]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 15 05:37:29 crc kubenswrapper[4747]: livez check failed Dec 15 05:37:29 crc kubenswrapper[4747]: I1215 05:37:29.616712 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 15 05:37:30 crc kubenswrapper[4747]: I1215 05:37:30.460136 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 15 05:37:30 crc kubenswrapper[4747]: I1215 05:37:30.460368 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:30 crc kubenswrapper[4747]: I1215 05:37:30.461653 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:30 crc kubenswrapper[4747]: I1215 05:37:30.461703 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:30 crc kubenswrapper[4747]: I1215 05:37:30.461716 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:30 crc kubenswrapper[4747]: I1215 05:37:30.484145 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 15 05:37:30 crc kubenswrapper[4747]: I1215 05:37:30.701876 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:30 crc kubenswrapper[4747]: I1215 05:37:30.702712 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:30 crc kubenswrapper[4747]: I1215 05:37:30.702775 4747 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:30 crc kubenswrapper[4747]: I1215 05:37:30.702790 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:30 crc kubenswrapper[4747]: I1215 05:37:30.712752 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 15 05:37:31 crc kubenswrapper[4747]: I1215 05:37:31.703046 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:31 crc kubenswrapper[4747]: I1215 05:37:31.703680 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:31 crc kubenswrapper[4747]: I1215 05:37:31.703710 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:31 crc kubenswrapper[4747]: I1215 05:37:31.703719 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:32 crc kubenswrapper[4747]: I1215 05:37:32.056710 4747 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 15 05:37:32 crc kubenswrapper[4747]: I1215 05:37:32.056787 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 15 05:37:32 crc kubenswrapper[4747]: I1215 05:37:32.565862 4747 
certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 15 05:37:32 crc kubenswrapper[4747]: I1215 05:37:32.578123 4747 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Dec 15 05:37:33 crc kubenswrapper[4747]: E1215 05:37:33.634432 4747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 15 05:37:33 crc kubenswrapper[4747]: I1215 05:37:33.635879 4747 trace.go:236] Trace[1871377301]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (15-Dec-2025 05:37:20.513) (total time: 13122ms): Dec 15 05:37:33 crc kubenswrapper[4747]: Trace[1871377301]: ---"Objects listed" error: 13122ms (05:37:33.635) Dec 15 05:37:33 crc kubenswrapper[4747]: Trace[1871377301]: [13.122657005s] [13.122657005s] END Dec 15 05:37:33 crc kubenswrapper[4747]: I1215 05:37:33.635904 4747 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 15 05:37:33 crc kubenswrapper[4747]: I1215 05:37:33.635882 4747 trace.go:236] Trace[18213551]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (15-Dec-2025 05:37:19.525) (total time: 14109ms): Dec 15 05:37:33 crc kubenswrapper[4747]: Trace[18213551]: ---"Objects listed" error: 14109ms (05:37:33.635) Dec 15 05:37:33 crc kubenswrapper[4747]: Trace[18213551]: [14.109943497s] [14.109943497s] END Dec 15 05:37:33 crc kubenswrapper[4747]: I1215 05:37:33.635997 4747 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 15 05:37:33 crc kubenswrapper[4747]: I1215 05:37:33.636704 4747 trace.go:236] Trace[72585786]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (15-Dec-2025 05:37:20.249) (total time: 13387ms): 
Dec 15 05:37:33 crc kubenswrapper[4747]: Trace[72585786]: ---"Objects listed" error: 13387ms (05:37:33.636) Dec 15 05:37:33 crc kubenswrapper[4747]: Trace[72585786]: [13.387110415s] [13.387110415s] END Dec 15 05:37:33 crc kubenswrapper[4747]: I1215 05:37:33.636742 4747 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 15 05:37:33 crc kubenswrapper[4747]: I1215 05:37:33.637677 4747 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 15 05:37:33 crc kubenswrapper[4747]: I1215 05:37:33.638667 4747 trace.go:236] Trace[19274458]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (15-Dec-2025 05:37:20.125) (total time: 13513ms): Dec 15 05:37:33 crc kubenswrapper[4747]: Trace[19274458]: ---"Objects listed" error: 13513ms (05:37:33.638) Dec 15 05:37:33 crc kubenswrapper[4747]: Trace[19274458]: [13.513562614s] [13.513562614s] END Dec 15 05:37:33 crc kubenswrapper[4747]: I1215 05:37:33.638689 4747 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 15 05:37:33 crc kubenswrapper[4747]: E1215 05:37:33.639835 4747 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 15 05:37:33 crc kubenswrapper[4747]: I1215 05:37:33.742434 4747 csr.go:261] certificate signing request csr-sv2rt is approved, waiting to be issued Dec 15 05:37:33 crc kubenswrapper[4747]: I1215 05:37:33.747301 4747 csr.go:257] certificate signing request csr-sv2rt is issued Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.587643 4747 apiserver.go:52] "Watching apiserver" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.590433 4747 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.590992 4747 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-p2w9d","openshift-image-registry/node-ca-cltgw","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.591536 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.591577 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:37:34 crc kubenswrapper[4747]: E1215 05:37:34.591675 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.591755 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:37:34 crc kubenswrapper[4747]: E1215 05:37:34.591943 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.592022 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.592217 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.593628 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.593737 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-cltgw" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.593777 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.594039 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.594107 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:34 crc kubenswrapper[4747]: E1215 05:37:34.594171 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.594248 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-p2w9d" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.594655 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.596648 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.596669 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.597551 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.597559 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.598647 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.598843 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.598859 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.598993 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 15 05:37:34 
crc kubenswrapper[4747]: I1215 05:37:34.599029 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.599106 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.599115 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.600995 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.613916 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.619288 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.619776 4747 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial 
tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.619896 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.623841 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.625990 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.633577 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.638630 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.644823 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.654991 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.661562 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.668102 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.674661 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.680865 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.689352 4747 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.711399 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.712993 4747 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578" exitCode=255 Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.713038 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578"} Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.742520 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.742563 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.742585 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.742602 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.742684 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.742706 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 15 05:37:34 crc 
kubenswrapper[4747]: I1215 05:37:34.742724 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.742742 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.742762 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.742778 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.742809 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.742829 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.742846 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.742865 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.742882 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.742901 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.742918 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 15 05:37:34 crc 
kubenswrapper[4747]: I1215 05:37:34.742951 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.742967 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.742984 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743001 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743016 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743032 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743054 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743083 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743101 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743118 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743134 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743163 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743179 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743197 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743218 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743236 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743252 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 
05:37:34.743272 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743292 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743344 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743367 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743391 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743409 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743428 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743446 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743463 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743480 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743481 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743497 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743519 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743536 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") 
pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743531 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743552 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743599 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743621 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743626 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: 
"25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743644 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743661 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743678 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743696 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743711 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743726 4747 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743741 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743755 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743772 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743790 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743816 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743830 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743847 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743862 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743877 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743895 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743912 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743946 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743971 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743988 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744002 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744017 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744033 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744050 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744065 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744081 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744098 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744114 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744128 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744144 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744160 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744178 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744195 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744212 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744227 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744242 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744257 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744274 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744291 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744305 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744322 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744360 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744377 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744394 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 
05:37:34.744410 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744436 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744452 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744468 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744483 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744501 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744517 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744533 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744548 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744562 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744578 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 15 
05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744591 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744605 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744622 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744637 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744653 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744667 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744687 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744704 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744720 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744735 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744753 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 15 05:37:34 crc 
kubenswrapper[4747]: I1215 05:37:34.744770 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744787 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744812 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744828 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744845 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744863 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744881 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744899 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744915 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744945 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744963 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744981 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744997 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745013 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745029 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745047 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745063 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745079 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745096 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745112 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745127 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745143 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745160 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745180 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745196 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745211 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745226 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745248 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745265 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745281 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745299 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745315 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745332 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 15 05:37:34 crc 
kubenswrapper[4747]: I1215 05:37:34.745360 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745378 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745395 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745413 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745432 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745449 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745466 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745483 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745499 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745515 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745533 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 15 05:37:34 crc 
kubenswrapper[4747]: I1215 05:37:34.745550 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745566 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745585 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745603 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745620 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745637 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745652 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745668 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745684 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745699 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745714 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:37:34 crc 
kubenswrapper[4747]: I1215 05:37:34.746225 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746251 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746270 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746292 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746307 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746323 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746338 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746356 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746370 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746387 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746404 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746420 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746438 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746455 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746474 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746491 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746511 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746528 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746544 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746561 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746578 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746595 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 
05:37:34.746611 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746628 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746643 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746663 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746709 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746728 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746747 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746765 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746780 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746808 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746837 4747 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746859 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746876 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b3c83e90-bb8c-4909-9633-8f59ca12db6f-host\") pod \"node-ca-cltgw\" (UID: \"b3c83e90-bb8c-4909-9633-8f59ca12db6f\") " pod="openshift-image-registry/node-ca-cltgw" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746892 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b3c83e90-bb8c-4909-9633-8f59ca12db6f-serviceca\") pod \"node-ca-cltgw\" (UID: \"b3c83e90-bb8c-4909-9633-8f59ca12db6f\") " pod="openshift-image-registry/node-ca-cltgw" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746910 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 15 05:37:34 crc 
kubenswrapper[4747]: I1215 05:37:34.746942 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746959 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw7nq\" (UniqueName: \"kubernetes.io/projected/1abdca76-2fcd-44fc-a09d-ded3084306d7-kube-api-access-fw7nq\") pod \"node-resolver-p2w9d\" (UID: \"1abdca76-2fcd-44fc-a09d-ded3084306d7\") " pod="openshift-dns/node-resolver-p2w9d" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746979 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746994 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747012 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747032 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747049 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1abdca76-2fcd-44fc-a09d-ded3084306d7-hosts-file\") pod \"node-resolver-p2w9d\" (UID: \"1abdca76-2fcd-44fc-a09d-ded3084306d7\") " pod="openshift-dns/node-resolver-p2w9d" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747067 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg2m9\" (UniqueName: \"kubernetes.io/projected/b3c83e90-bb8c-4909-9633-8f59ca12db6f-kube-api-access-sg2m9\") pod \"node-ca-cltgw\" (UID: \"b3c83e90-bb8c-4909-9633-8f59ca12db6f\") " pod="openshift-image-registry/node-ca-cltgw" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747106 4747 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747118 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747128 4747 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747137 4747 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747146 4747 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.755640 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.758556 4747 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.759368 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.772635 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743751 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743773 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.743919 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744073 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744225 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744247 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744339 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744371 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744458 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744603 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744713 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.744737 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745020 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745041 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745129 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745199 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745246 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745299 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745332 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745511 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745719 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.745987 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746160 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746201 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746439 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746474 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.746660 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.773233 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747050 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747103 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747236 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747264 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747305 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: E1215 05:37:34.747352 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:37:35.247334904 +0000 UTC m=+18.943846821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747374 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747530 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747535 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747554 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747647 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747706 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747713 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747732 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747839 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.747892 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.748193 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.748454 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.748517 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.748541 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.748691 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.748718 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.748825 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.749057 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.749299 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.749350 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.749389 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.749560 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.749722 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.749753 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.749822 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.749943 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.749979 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.750049 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.750058 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.750209 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.748126 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.750651 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.750732 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.754914 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.755024 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.755177 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.755150 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.755216 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.755215 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.755263 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.755358 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.755403 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.755562 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.755602 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.756032 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.756044 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.756179 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.756304 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.756311 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.756321 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.750770 4747 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-15 05:32:33 +0000 UTC, rotation deadline is 2026-10-23 11:33:39.589287713 +0000 UTC Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.756366 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.756375 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.756531 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.756549 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.756697 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.756713 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.756723 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.756866 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.756975 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.757192 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.757200 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.757335 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.757437 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.757488 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.757551 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.757691 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.757858 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.758087 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.758394 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.758860 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.759170 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.759560 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.759607 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.759659 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.759978 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: E1215 05:37:34.760082 4747 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.760217 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.760835 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: E1215 05:37:34.764493 4747 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.771616 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.771776 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.772021 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.772411 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.772719 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.773263 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.773470 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.773583 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.773892 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.774009 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.774247 4747 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7493h56m4.815046346s for next certificate rotation Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.774251 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.774328 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: E1215 05:37:34.774437 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-15 05:37:35.274414932 +0000 UTC m=+18.970926850 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.774508 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.774559 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.774910 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.774976 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.775058 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.775219 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.775269 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: E1215 05:37:34.775386 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:35.275374797 +0000 UTC m=+18.971886714 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.775442 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.775483 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.775687 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: E1215 05:37:34.776362 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 15 05:37:34 crc kubenswrapper[4747]: E1215 05:37:34.776386 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 15 05:37:34 crc kubenswrapper[4747]: E1215 05:37:34.776397 4747 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:37:34 crc kubenswrapper[4747]: E1215 05:37:34.776457 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:35.276439089 +0000 UTC m=+18.972951005 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.776589 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.776847 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.777060 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.777380 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.777720 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.777785 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.778139 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.778361 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.778437 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.778391 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: E1215 05:37:34.778610 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 15 05:37:34 crc kubenswrapper[4747]: E1215 05:37:34.778643 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 15 05:37:34 crc kubenswrapper[4747]: E1215 05:37:34.778657 4747 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:37:34 crc kubenswrapper[4747]: E1215 05:37:34.778703 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:35.278691874 +0000 UTC m=+18.975203792 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.778754 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.778808 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.778847 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.780027 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.780162 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.780299 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.780534 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.780811 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.781120 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.781402 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.781499 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.780215 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" 
(OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.782000 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.782082 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.782138 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.782361 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.782369 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.782725 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.782856 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.783118 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.783337 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.783401 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.783534 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.783623 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.783810 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.784867 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.785109 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.785351 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.787505 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.787988 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.788102 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.788244 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.788367 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.788583 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.791066 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.792445 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.793138 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.794090 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.796477 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.796961 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.798015 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.798336 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.799809 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.800483 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.801993 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.802120 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.802260 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.805009 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.815673 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.819326 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.832358 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.834601 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-nldtn"] Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.835390 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-gmfps"] Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.837143 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-pc5tw"] Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.837585 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.837880 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.838885 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-gmfps" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.841166 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.842136 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.844590 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.844750 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.844801 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.844997 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.845077 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.845119 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.845166 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 15 05:37:34 crc 
kubenswrapper[4747]: I1215 05:37:34.845202 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.845249 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.845282 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.845400 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.847736 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b3c83e90-bb8c-4909-9633-8f59ca12db6f-host\") pod \"node-ca-cltgw\" (UID: \"b3c83e90-bb8c-4909-9633-8f59ca12db6f\") " pod="openshift-image-registry/node-ca-cltgw" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.847765 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b3c83e90-bb8c-4909-9633-8f59ca12db6f-serviceca\") pod \"node-ca-cltgw\" (UID: \"b3c83e90-bb8c-4909-9633-8f59ca12db6f\") " pod="openshift-image-registry/node-ca-cltgw" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.847788 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw7nq\" (UniqueName: \"kubernetes.io/projected/1abdca76-2fcd-44fc-a09d-ded3084306d7-kube-api-access-fw7nq\") pod \"node-resolver-p2w9d\" (UID: \"1abdca76-2fcd-44fc-a09d-ded3084306d7\") " pod="openshift-dns/node-resolver-p2w9d" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.847811 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.847842 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg2m9\" (UniqueName: \"kubernetes.io/projected/b3c83e90-bb8c-4909-9633-8f59ca12db6f-kube-api-access-sg2m9\") pod \"node-ca-cltgw\" (UID: \"b3c83e90-bb8c-4909-9633-8f59ca12db6f\") " pod="openshift-image-registry/node-ca-cltgw" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.847860 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1abdca76-2fcd-44fc-a09d-ded3084306d7-hosts-file\") pod \"node-resolver-p2w9d\" (UID: \"1abdca76-2fcd-44fc-a09d-ded3084306d7\") " pod="openshift-dns/node-resolver-p2w9d" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.847890 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.847948 4747 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.847960 4747 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.847969 4747 
reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.847977 4747 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.847986 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.847996 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848003 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848016 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848044 4747 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848053 4747 reconciler_common.go:293] "Volume detached for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848061 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848069 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848089 4747 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848098 4747 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848106 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848115 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848125 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848134 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848142 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848151 4747 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848159 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848185 4747 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848196 4747 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848219 4747 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848228 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848236 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848245 4747 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848254 4747 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848261 4747 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848272 4747 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848280 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848288 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848298 4747 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848307 4747 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848315 4747 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848323 4747 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848344 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848352 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" 
DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848360 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848368 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848375 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848383 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848392 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848401 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848412 4747 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848419 4747 reconciler_common.go:293] "Volume detached for 
volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848427 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848436 4747 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848444 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848452 4747 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848460 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848468 4747 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848476 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848485 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848494 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848501 4747 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848510 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848517 4747 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848525 4747 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848533 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node 
\"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848540 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848548 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848556 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848565 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848574 4747 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848586 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848595 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 
crc kubenswrapper[4747]: I1215 05:37:34.848602 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848610 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848618 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848625 4747 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848633 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848642 4747 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848651 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848659 4747 
reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848667 4747 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848674 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848683 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848691 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848700 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848708 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848717 4747 reconciler_common.go:293] "Volume detached for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848725 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848733 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848740 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848748 4747 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848755 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848764 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848772 4747 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848779 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848787 4747 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848801 4747 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848809 4747 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848816 4747 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848826 4747 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848837 4747 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848862 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b3c83e90-bb8c-4909-9633-8f59ca12db6f-host\") pod \"node-ca-cltgw\" (UID: \"b3c83e90-bb8c-4909-9633-8f59ca12db6f\") " pod="openshift-image-registry/node-ca-cltgw" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.848915 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.849061 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.849084 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.849261 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1abdca76-2fcd-44fc-a09d-ded3084306d7-hosts-file\") pod \"node-resolver-p2w9d\" (UID: \"1abdca76-2fcd-44fc-a09d-ded3084306d7\") " pod="openshift-dns/node-resolver-p2w9d" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850239 4747 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b3c83e90-bb8c-4909-9633-8f59ca12db6f-serviceca\") pod \"node-ca-cltgw\" (UID: \"b3c83e90-bb8c-4909-9633-8f59ca12db6f\") " pod="openshift-image-registry/node-ca-cltgw" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850283 4747 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850297 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850307 4747 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850318 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850346 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850356 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850364 4747 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850373 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850382 4747 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850392 4747 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850401 4747 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850425 4747 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850435 4747 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850444 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc 
kubenswrapper[4747]: I1215 05:37:34.850454 4747 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850462 4747 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850470 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850479 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850504 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850515 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850527 4747 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850535 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850544 4747 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850553 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850580 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850589 4747 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850598 4747 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850606 4747 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850614 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850623 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850631 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850639 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850664 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850673 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850684 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850693 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850701 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850709 4747 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850734 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850743 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850751 4747 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850760 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850768 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on 
node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850776 4747 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850784 4747 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850792 4747 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850828 4747 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850835 4747 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850844 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850852 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850860 4747 reconciler_common.go:293] "Volume detached for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850868 4747 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850879 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850902 4747 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850910 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.850918 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851027 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851037 4747 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851046 4747 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851054 4747 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851062 4747 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851070 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851078 4747 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851103 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851114 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc 
kubenswrapper[4747]: I1215 05:37:34.851124 4747 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851132 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851140 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851149 4747 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851158 4747 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851181 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851189 4747 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851198 4747 
reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851205 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851213 4747 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851221 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851228 4747 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851237 4747 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851267 4747 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851275 4747 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851283 4747 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851291 4747 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851301 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851308 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851318 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851343 4747 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851352 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851360 4747 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851368 4747 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851376 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.851384 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.856397 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.864722 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.866879 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.867177 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw7nq\" (UniqueName: \"kubernetes.io/projected/1abdca76-2fcd-44fc-a09d-ded3084306d7-kube-api-access-fw7nq\") pod \"node-resolver-p2w9d\" (UID: \"1abdca76-2fcd-44fc-a09d-ded3084306d7\") " pod="openshift-dns/node-resolver-p2w9d" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.868130 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg2m9\" (UniqueName: \"kubernetes.io/projected/b3c83e90-bb8c-4909-9633-8f59ca12db6f-kube-api-access-sg2m9\") pod \"node-ca-cltgw\" (UID: \"b3c83e90-bb8c-4909-9633-8f59ca12db6f\") " pod="openshift-image-registry/node-ca-cltgw" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.884461 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.892997 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.901160 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha
256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.907701 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.914573 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.918141 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.922627 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.924388 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.928190 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-cltgw" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.933248 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-p2w9d" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.935865 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.943806 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.951650 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-host-run-k8s-cni-cncf-io\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.951686 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-host-var-lib-kubelet\") 
pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.951720 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0b19a93a-5d3a-44c6-b207-8e4ee3be6c20-cni-binary-copy\") pod \"multus-additional-cni-plugins-pc5tw\" (UID: \"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\") " pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.951740 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0b19a93a-5d3a-44c6-b207-8e4ee3be6c20-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pc5tw\" (UID: \"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\") " pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.951764 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-host-run-multus-certs\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.951783 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1d50e5c9-7ce9-40c0-b942-01031654d27c-rootfs\") pod \"machine-config-daemon-nldtn\" (UID: \"1d50e5c9-7ce9-40c0-b942-01031654d27c\") " pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.951814 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/1d50e5c9-7ce9-40c0-b942-01031654d27c-proxy-tls\") pod \"machine-config-daemon-nldtn\" (UID: \"1d50e5c9-7ce9-40c0-b942-01031654d27c\") " pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.951833 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-multus-socket-dir-parent\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.951847 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-host-var-lib-cni-multus\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.951865 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-cnibin\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.951880 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/89350c5d-9a77-499e-81ec-376b012cc219-cni-binary-copy\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.951895 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-host-run-netns\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.951908 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-hostroot\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.951939 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d50e5c9-7ce9-40c0-b942-01031654d27c-mcd-auth-proxy-config\") pod \"machine-config-daemon-nldtn\" (UID: \"1d50e5c9-7ce9-40c0-b942-01031654d27c\") " pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.951961 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpldx\" (UniqueName: \"kubernetes.io/projected/1d50e5c9-7ce9-40c0-b942-01031654d27c-kube-api-access-wpldx\") pod \"machine-config-daemon-nldtn\" (UID: \"1d50e5c9-7ce9-40c0-b942-01031654d27c\") " pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.951975 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-os-release\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.951989 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-multus-conf-dir\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.952001 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-etc-kubernetes\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.952020 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-multus-cni-dir\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.952037 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0b19a93a-5d3a-44c6-b207-8e4ee3be6c20-cnibin\") pod \"multus-additional-cni-plugins-pc5tw\" (UID: \"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\") " pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.952058 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0b19a93a-5d3a-44c6-b207-8e4ee3be6c20-os-release\") pod \"multus-additional-cni-plugins-pc5tw\" (UID: \"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\") " pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.952071 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/89350c5d-9a77-499e-81ec-376b012cc219-multus-daemon-config\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.952096 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-host-var-lib-cni-bin\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.952111 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfkwt\" (UniqueName: \"kubernetes.io/projected/0b19a93a-5d3a-44c6-b207-8e4ee3be6c20-kube-api-access-pfkwt\") pod \"multus-additional-cni-plugins-pc5tw\" (UID: \"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\") " pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.952126 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0b19a93a-5d3a-44c6-b207-8e4ee3be6c20-system-cni-dir\") pod \"multus-additional-cni-plugins-pc5tw\" (UID: \"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\") " pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.952140 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbpgt\" (UniqueName: \"kubernetes.io/projected/89350c5d-9a77-499e-81ec-376b012cc219-kube-api-access-jbpgt\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.952160 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0b19a93a-5d3a-44c6-b207-8e4ee3be6c20-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pc5tw\" (UID: \"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\") " pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.952174 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-system-cni-dir\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.952196 4747 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.957229 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.964588 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.971011 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.983992 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd63
2569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 15 05:37:34 crc kubenswrapper[4747]: I1215 05:37:34.997635 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.006752 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.013718 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers 
with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.019181 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.028310 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.037473 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.051697 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.052847 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-host-var-lib-cni-bin\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.052880 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0b19a93a-5d3a-44c6-b207-8e4ee3be6c20-system-cni-dir\") pod \"multus-additional-cni-plugins-pc5tw\" (UID: \"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\") " pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.052900 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfkwt\" (UniqueName: 
\"kubernetes.io/projected/0b19a93a-5d3a-44c6-b207-8e4ee3be6c20-kube-api-access-pfkwt\") pod \"multus-additional-cni-plugins-pc5tw\" (UID: \"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\") " pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.052941 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0b19a93a-5d3a-44c6-b207-8e4ee3be6c20-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pc5tw\" (UID: \"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\") " pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.052959 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbpgt\" (UniqueName: \"kubernetes.io/projected/89350c5d-9a77-499e-81ec-376b012cc219-kube-api-access-jbpgt\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.052977 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-system-cni-dir\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.052993 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-host-run-k8s-cni-cncf-io\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053006 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-host-var-lib-kubelet\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053020 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0b19a93a-5d3a-44c6-b207-8e4ee3be6c20-cni-binary-copy\") pod \"multus-additional-cni-plugins-pc5tw\" (UID: \"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\") " pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053035 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0b19a93a-5d3a-44c6-b207-8e4ee3be6c20-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pc5tw\" (UID: \"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\") " pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053061 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-host-run-multus-certs\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053076 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1d50e5c9-7ce9-40c0-b942-01031654d27c-rootfs\") pod \"machine-config-daemon-nldtn\" (UID: \"1d50e5c9-7ce9-40c0-b942-01031654d27c\") " pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053097 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/1d50e5c9-7ce9-40c0-b942-01031654d27c-proxy-tls\") pod \"machine-config-daemon-nldtn\" (UID: \"1d50e5c9-7ce9-40c0-b942-01031654d27c\") " pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053116 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-multus-socket-dir-parent\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053131 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-host-var-lib-cni-multus\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053146 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d50e5c9-7ce9-40c0-b942-01031654d27c-mcd-auth-proxy-config\") pod \"machine-config-daemon-nldtn\" (UID: \"1d50e5c9-7ce9-40c0-b942-01031654d27c\") " pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053163 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpldx\" (UniqueName: \"kubernetes.io/projected/1d50e5c9-7ce9-40c0-b942-01031654d27c-kube-api-access-wpldx\") pod \"machine-config-daemon-nldtn\" (UID: \"1d50e5c9-7ce9-40c0-b942-01031654d27c\") " pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053178 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" 
(UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-cnibin\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053192 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/89350c5d-9a77-499e-81ec-376b012cc219-cni-binary-copy\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053205 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-host-run-netns\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053220 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-hostroot\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053236 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-multus-cni-dir\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053252 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-os-release\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 
15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053265 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-multus-conf-dir\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053281 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-etc-kubernetes\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053296 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0b19a93a-5d3a-44c6-b207-8e4ee3be6c20-cnibin\") pod \"multus-additional-cni-plugins-pc5tw\" (UID: \"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\") " pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053316 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0b19a93a-5d3a-44c6-b207-8e4ee3be6c20-os-release\") pod \"multus-additional-cni-plugins-pc5tw\" (UID: \"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\") " pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053329 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/89350c5d-9a77-499e-81ec-376b012cc219-multus-daemon-config\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053697 4747 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-multus-socket-dir-parent\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053762 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-host-var-lib-cni-bin\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053790 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0b19a93a-5d3a-44c6-b207-8e4ee3be6c20-system-cni-dir\") pod \"multus-additional-cni-plugins-pc5tw\" (UID: \"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\") " pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053882 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/89350c5d-9a77-499e-81ec-376b012cc219-multus-daemon-config\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.053971 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-host-var-lib-cni-multus\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.054406 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/1d50e5c9-7ce9-40c0-b942-01031654d27c-mcd-auth-proxy-config\") pod \"machine-config-daemon-nldtn\" (UID: \"1d50e5c9-7ce9-40c0-b942-01031654d27c\") " pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.054443 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0b19a93a-5d3a-44c6-b207-8e4ee3be6c20-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pc5tw\" (UID: \"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\") " pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.054583 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-system-cni-dir\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.054628 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-host-run-k8s-cni-cncf-io\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.054648 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-cnibin\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.054651 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-host-var-lib-kubelet\") pod \"multus-gmfps\" 
(UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.055183 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0b19a93a-5d3a-44c6-b207-8e4ee3be6c20-cni-binary-copy\") pod \"multus-additional-cni-plugins-pc5tw\" (UID: \"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\") " pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.055573 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0b19a93a-5d3a-44c6-b207-8e4ee3be6c20-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pc5tw\" (UID: \"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\") " pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.055608 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-host-run-multus-certs\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.055633 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1d50e5c9-7ce9-40c0-b942-01031654d27c-rootfs\") pod \"machine-config-daemon-nldtn\" (UID: \"1d50e5c9-7ce9-40c0-b942-01031654d27c\") " pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.056102 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/89350c5d-9a77-499e-81ec-376b012cc219-cni-binary-copy\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " 
pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.057009 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-host-run-netns\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.057060 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-hostroot\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.057107 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-multus-cni-dir\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.057147 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-os-release\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.057186 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-multus-conf-dir\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.057210 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/89350c5d-9a77-499e-81ec-376b012cc219-etc-kubernetes\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.057236 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0b19a93a-5d3a-44c6-b207-8e4ee3be6c20-cnibin\") pod \"multus-additional-cni-plugins-pc5tw\" (UID: \"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\") " pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.057269 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0b19a93a-5d3a-44c6-b207-8e4ee3be6c20-os-release\") pod \"multus-additional-cni-plugins-pc5tw\" (UID: \"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\") " pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.061723 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d50e5c9-7ce9-40c0-b942-01031654d27c-proxy-tls\") pod \"machine-config-daemon-nldtn\" (UID: \"1d50e5c9-7ce9-40c0-b942-01031654d27c\") " pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.065894 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.068120 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfkwt\" (UniqueName: \"kubernetes.io/projected/0b19a93a-5d3a-44c6-b207-8e4ee3be6c20-kube-api-access-pfkwt\") pod \"multus-additional-cni-plugins-pc5tw\" (UID: \"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\") " pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.071723 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbpgt\" (UniqueName: \"kubernetes.io/projected/89350c5d-9a77-499e-81ec-376b012cc219-kube-api-access-jbpgt\") pod \"multus-gmfps\" (UID: \"89350c5d-9a77-499e-81ec-376b012cc219\") " pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.074359 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpldx\" (UniqueName: \"kubernetes.io/projected/1d50e5c9-7ce9-40c0-b942-01031654d27c-kube-api-access-wpldx\") pod \"machine-config-daemon-nldtn\" (UID: \"1d50e5c9-7ce9-40c0-b942-01031654d27c\") " 
pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.097153 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.117243 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.135622 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.153088 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.163938 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.172167 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gmfps" Dec 15 05:37:35 crc kubenswrapper[4747]: W1215 05:37:35.172943 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d50e5c9_7ce9_40c0_b942_01031654d27c.slice/crio-f3b89d7653c40ab7f9e2f4146c22afed0d23c8883adc86e75947c5ba936a8cf0 WatchSource:0}: Error finding container f3b89d7653c40ab7f9e2f4146c22afed0d23c8883adc86e75947c5ba936a8cf0: Status 404 returned error can't find the container with id f3b89d7653c40ab7f9e2f4146c22afed0d23c8883adc86e75947c5ba936a8cf0 Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.187503 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-82lhw"] Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.188402 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.189789 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.190050 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.190171 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.190379 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.190596 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.191964 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.192136 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 15 05:37:35 crc kubenswrapper[4747]: W1215 05:37:35.195895 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89350c5d_9a77_499e_81ec_376b012cc219.slice/crio-8d222d21222f2f3bcfa1ffa65a903af4811154e8303ac566ce8c9424771cd60f WatchSource:0}: Error finding container 8d222d21222f2f3bcfa1ffa65a903af4811154e8303ac566ce8c9424771cd60f: Status 404 returned error can't find the container with id 8d222d21222f2f3bcfa1ffa65a903af4811154e8303ac566ce8c9424771cd60f Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.206455 4747 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.232497 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.242111 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.251493 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.256176 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.256351 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-ovnkube-script-lib\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.256400 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-kubelet\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.256429 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-etc-openvswitch\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.256464 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-node-log\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.256491 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-ovnkube-config\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.256531 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-run-ovn-kubernetes\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.256560 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-run-netns\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.256585 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwzq6\" (UniqueName: \"kubernetes.io/projected/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-kube-api-access-zwzq6\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.256603 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-ovn-node-metrics-cert\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.256626 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-systemd-units\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 
05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.256648 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-run-openvswitch\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.256683 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-run-ovn\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.256704 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-log-socket\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.256734 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-slash\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.256754 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-run-systemd\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc 
kubenswrapper[4747]: I1215 05:37:35.256773 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-cni-netd\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.256800 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-env-overrides\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.256832 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-cni-bin\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.256850 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-var-lib-openvswitch\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.256876 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: E1215 05:37:35.257010 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:37:36.256988145 +0000 UTC m=+19.953500062 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.268093 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.278729 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.296290 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.312354 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha
256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.323576 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.336182 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.357908 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwzq6\" (UniqueName: \"kubernetes.io/projected/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-kube-api-access-zwzq6\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.357972 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-ovn-node-metrics-cert\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.357990 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-run-openvswitch\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358008 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-systemd-units\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358027 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358042 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-log-socket\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358060 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-run-ovn\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358077 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358093 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-run-systemd\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358107 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-cni-netd\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358121 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-env-overrides\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358135 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-slash\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358153 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-cni-bin\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358171 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-var-lib-openvswitch\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358187 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358204 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-kubelet\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358218 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-etc-openvswitch\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358231 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-node-log\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358244 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-ovnkube-script-lib\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358259 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358273 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-ovnkube-config\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358289 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-run-ovn-kubernetes\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358306 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" 
(UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358319 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-run-netns\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.358372 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-run-netns\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.359096 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-cni-bin\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.359133 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-run-openvswitch\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.359155 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-systemd-units\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: E1215 05:37:35.359194 4747 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 15 05:37:35 crc kubenswrapper[4747]: E1215 05:37:35.359223 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:36.359212238 +0000 UTC m=+20.055724155 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.359244 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-log-socket\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.359266 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-run-ovn\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: E1215 05:37:35.359315 4747 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 15 05:37:35 crc kubenswrapper[4747]: E1215 05:37:35.359330 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 15 05:37:35 crc kubenswrapper[4747]: E1215 05:37:35.359343 4747 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:37:35 crc kubenswrapper[4747]: E1215 05:37:35.359363 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:36.35935703 +0000 UTC m=+20.055868948 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.359386 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-run-systemd\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.359409 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-cni-netd\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.359780 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-env-overrides\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.359829 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-slash\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.360053 4747 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-node-log\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.360084 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-var-lib-openvswitch\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.360135 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.360159 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-kubelet\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.360181 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-etc-openvswitch\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.360204 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-run-ovn-kubernetes\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: E1215 05:37:35.360245 4747 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 15 05:37:35 crc kubenswrapper[4747]: E1215 05:37:35.360270 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:36.360263015 +0000 UTC m=+20.056774932 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 15 05:37:35 crc kubenswrapper[4747]: E1215 05:37:35.360439 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 15 05:37:35 crc kubenswrapper[4747]: E1215 05:37:35.360471 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 15 05:37:35 crc kubenswrapper[4747]: E1215 05:37:35.360486 4747 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:37:35 crc kubenswrapper[4747]: E1215 05:37:35.360545 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:36.360527953 +0000 UTC m=+20.057039870 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.360657 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-ovnkube-config\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.360816 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-ovnkube-script-lib\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.363300 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-ovn-node-metrics-cert\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.370954 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.397491 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwzq6\" (UniqueName: \"kubernetes.io/projected/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-kube-api-access-zwzq6\") pod \"ovnkube-node-82lhw\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.430393 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.474727 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:35Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.498117 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:35 crc kubenswrapper[4747]: W1215 05:37:35.533455 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b2ee692_1e9a_49c0_b2f0_dfed89ebf7b7.slice/crio-2251cfb228a94a76c54c7da530d41c6fd089ff40571247c2fd71b84610388940 WatchSource:0}: Error finding container 2251cfb228a94a76c54c7da530d41c6fd089ff40571247c2fd71b84610388940: Status 404 returned error can't find the container with id 2251cfb228a94a76c54c7da530d41c6fd089ff40571247c2fd71b84610388940 Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.569264 4747 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.569336 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.718020 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cltgw" event={"ID":"b3c83e90-bb8c-4909-9633-8f59ca12db6f","Type":"ContainerStarted","Data":"c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575"} Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.718107 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cltgw" event={"ID":"b3c83e90-bb8c-4909-9633-8f59ca12db6f","Type":"ContainerStarted","Data":"0a06ee1279907ad5939f55d3e0d22e6c142f5b9298994b152844c2fd41ebadee"} Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.721481 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p2w9d" event={"ID":"1abdca76-2fcd-44fc-a09d-ded3084306d7","Type":"ContainerStarted","Data":"e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de"} Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.721540 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p2w9d" event={"ID":"1abdca76-2fcd-44fc-a09d-ded3084306d7","Type":"ContainerStarted","Data":"590fcbf21033ed690ac1169a34da92ffa99dba028968efdcb7d200ac2c6d6b43"} Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.723271 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a08152a300aed83d9303e53f223312967a87db2f46a820cd21c0e6024e604217"} Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.724848 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmfps" 
event={"ID":"89350c5d-9a77-499e-81ec-376b012cc219","Type":"ContainerStarted","Data":"31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d"} Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.724885 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmfps" event={"ID":"89350c5d-9a77-499e-81ec-376b012cc219","Type":"ContainerStarted","Data":"8d222d21222f2f3bcfa1ffa65a903af4811154e8303ac566ce8c9424771cd60f"} Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.726954 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerStarted","Data":"7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673"} Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.727009 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerStarted","Data":"d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96"} Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.727022 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerStarted","Data":"f3b89d7653c40ab7f9e2f4146c22afed0d23c8883adc86e75947c5ba936a8cf0"} Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.728395 4747 generic.go:334] "Generic (PLEG): container finished" podID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerID="9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec" exitCode=0 Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.728458 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" 
event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerDied","Data":"9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec"} Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.728477 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerStarted","Data":"2251cfb228a94a76c54c7da530d41c6fd089ff40571247c2fd71b84610388940"} Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.730555 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00"} Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.730592 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"93b95aa58fb2a867f9fdab9a0b113fb36bf793ec4f5c1129205127769faed29c"} Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.731413 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd63
2569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:35Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.732129 4747 generic.go:334] "Generic (PLEG): container finished" podID="0b19a93a-5d3a-44c6-b207-8e4ee3be6c20" containerID="fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273" exitCode=0 Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.732188 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" event={"ID":"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20","Type":"ContainerDied","Data":"fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273"} Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.732225 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" event={"ID":"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20","Type":"ContainerStarted","Data":"847a6e3490b94fb4a9a93b55d209e61968d9b7f5b5b1df401b3b4be8430fb32f"} Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.735715 4747 scope.go:117] "RemoveContainer" containerID="675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.736032 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64"} Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.736064 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19"} Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.736077 4747 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"94d00caf0b15d5c338f03bbcaef6ccdd8df90a836d7b6b17db688e9f2613f3c7"} Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.744019 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:35Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.754252 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.754281 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:35Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.764955 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:35Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.776772 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:35Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.785719 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:35Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.798895 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:35Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.808195 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:35Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.835697 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:35Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.875391 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:35Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.912712 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:35Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.954291 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:35Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:35 crc kubenswrapper[4747]: I1215 05:37:35.992154 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:35Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.058311 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.080206 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.127988 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.155995 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.194323 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.234393 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.268511 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.268730 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:37:38.268708082 +0000 UTC m=+21.965219999 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.280158 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489
a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.314535 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.358895 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.369467 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.369512 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.369540 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.369562 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.369686 4747 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.369705 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.369716 4747 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.369758 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:38.369745364 +0000 UTC m=+22.066257271 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.370026 4747 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.370085 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:38.370071177 +0000 UTC m=+22.066583094 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.370097 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.370110 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.370104 4747 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.370211 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:38.370185381 +0000 UTC m=+22.066697299 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.370120 4747 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.370283 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:38.370270402 +0000 UTC m=+22.066782319 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.395856 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPat
h\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.435121 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.471440 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.509973 4747 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.510732 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/pods/multus-gmfps/status\": read tcp 192.168.25.116:37400->192.168.25.116:6443: use of closed network connection" Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.510847 4747 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/events\": read tcp 192.168.25.116:37400->192.168.25.116:6443: use of closed network connection" 
event="&Event{ObjectMeta:{ovnkube-node-82lhw.18814ce09d59f02f openshift-ovn-kubernetes 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ovn-kubernetes,Name:ovnkube-node-82lhw,UID:2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7,APIVersion:v1,ResourceVersion:26698,FieldPath:spec.containers{northd},},Reason:Created,Message:Created container northd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-15 05:37:36.502247471 +0000 UTC m=+20.198759388,LastTimestamp:2025-12-15 05:37:36.502247471 +0000 UTC m=+20.198759388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.628776 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.628792 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.629143 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.628859 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.629340 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.629219 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.632999 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.633685 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.634378 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.635006 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" 
path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.635595 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.636132 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.636694 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.637292 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.637953 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.638430 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.638961 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.639582 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.640124 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.640598 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.641406 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.643131 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.643690 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.644353 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.644773 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.645351 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.645903 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.646413 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.646968 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.647391 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.648032 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.648452 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.649044 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.649598 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.651274 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.651873 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.652338 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.652919 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.653448 4747 
kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.653558 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.655039 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.655583 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.656069 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.657222 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.657866 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.658425 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 15 05:37:36 
crc kubenswrapper[4747]: I1215 05:37:36.661315 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.661783 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.662173 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.662675 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.663312 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.663973 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.664572 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.665056 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.665590 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.666176 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.666854 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.667368 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.667873 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.668376 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" 
path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.668902 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.669492 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.672120 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.675271 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.712879 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489
a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.739117 4747 generic.go:334] "Generic (PLEG): container finished" podID="0b19a93a-5d3a-44c6-b207-8e4ee3be6c20" containerID="bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba" exitCode=0 Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.739161 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" 
event={"ID":"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20","Type":"ContainerDied","Data":"bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba"} Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.749857 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.752130 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d"} Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.752677 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.755975 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerStarted","Data":"fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928"} Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.756011 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerStarted","Data":"cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170"} Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.756026 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerStarted","Data":"dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b"} Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.756040 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerStarted","Data":"75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b"} Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.756050 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerStarted","Data":"d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75"} Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.756060 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerStarted","Data":"34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8"} Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.761004 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.794708 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.839195 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.840325 4747 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.846639 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.846674 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.846685 4747 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.846831 4747 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.891840 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.907542 4747 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.907875 4747 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.908950 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.908981 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.908991 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.909009 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.909027 4747 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:36Z","lastTransitionTime":"2025-12-15T05:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.925425 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.928458 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.928516 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.928529 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.928551 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.928564 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:36Z","lastTransitionTime":"2025-12-15T05:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.938363 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.941687 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.941783 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.941882 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.941980 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.942041 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:36Z","lastTransitionTime":"2025-12-15T05:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.951328 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.952894 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.953861 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.953894 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.953906 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.953918 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.953946 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:36Z","lastTransitionTime":"2025-12-15T05:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.961875 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.964723 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.964758 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.964770 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.964783 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.964806 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:36Z","lastTransitionTime":"2025-12-15T05:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.973219 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:36 crc kubenswrapper[4747]: E1215 05:37:36.973323 4747 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.974368 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.974446 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.974514 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.974577 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.974649 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:36Z","lastTransitionTime":"2025-12-15T05:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:36 crc kubenswrapper[4747]: I1215 05:37:36.993216 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:36Z 
is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.033287 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.072107 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.077154 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.077203 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.077215 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.077232 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.077244 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:37Z","lastTransitionTime":"2025-12-15T05:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.121538 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.153613 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.179453 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.179496 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 
05:37:37.179509 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.179526 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.179537 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:37Z","lastTransitionTime":"2025-12-15T05:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.193387 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.234485 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.274585 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.281544 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.281592 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.281608 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.281629 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.281643 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:37Z","lastTransitionTime":"2025-12-15T05:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.313118 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.358228 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.383760 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.383805 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.383816 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.383835 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.383844 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:37Z","lastTransitionTime":"2025-12-15T05:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.394145 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.433141 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.471658 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.486578 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.486615 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.486627 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.486646 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.486661 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:37Z","lastTransitionTime":"2025-12-15T05:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.512840 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.554340 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.589376 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.589438 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.589450 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.589468 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.589486 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:37Z","lastTransitionTime":"2025-12-15T05:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.595028 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.692282 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.692333 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.692343 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.692361 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.692374 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:37Z","lastTransitionTime":"2025-12-15T05:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.760287 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa"} Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.762545 4747 generic.go:334] "Generic (PLEG): container finished" podID="0b19a93a-5d3a-44c6-b207-8e4ee3be6c20" containerID="498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e" exitCode=0 Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.762638 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" event={"ID":"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20","Type":"ContainerDied","Data":"498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e"} Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.773326 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.784102 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.794337 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.794373 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.794384 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.794403 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.794414 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:37Z","lastTransitionTime":"2025-12-15T05:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.798488 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.814197 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.826475 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.836092 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.872615 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.896608 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.896696 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.896710 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:37 crc 
kubenswrapper[4747]: I1215 05:37:37.896731 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.896763 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:37Z","lastTransitionTime":"2025-12-15T05:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.913867 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.954550 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.993181 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:37Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.998994 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.999103 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.999187 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.999278 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:37 crc kubenswrapper[4747]: I1215 05:37:37.999339 4747 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:37Z","lastTransitionTime":"2025-12-15T05:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.035604 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.072287 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489
a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.102051 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.102097 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.102109 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:38 crc 
kubenswrapper[4747]: I1215 05:37:38.102127 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.102140 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:38Z","lastTransitionTime":"2025-12-15T05:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.117139 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.157257 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.194675 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.204278 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.204309 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.204320 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 
05:37:38.204336 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.204348 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:38Z","lastTransitionTime":"2025-12-15T05:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.232877 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.275895 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 
05:37:38.287441 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:37:38 crc kubenswrapper[4747]: E1215 05:37:38.287601 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:37:42.287577553 +0000 UTC m=+25.984089469 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.306633 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.306681 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.306694 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.306709 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.306719 4747 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:38Z","lastTransitionTime":"2025-12-15T05:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.311759 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.353895 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.389043 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.389151 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.389232 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.389313 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:37:38 crc kubenswrapper[4747]: E1215 05:37:38.389321 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 15 05:37:38 crc kubenswrapper[4747]: E1215 05:37:38.389586 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 15 05:37:38 crc kubenswrapper[4747]: E1215 05:37:38.389652 4747 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:37:38 crc kubenswrapper[4747]: E1215 05:37:38.389753 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:42.389738748 +0000 UTC m=+26.086250665 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:37:38 crc kubenswrapper[4747]: E1215 05:37:38.389344 4747 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 15 05:37:38 crc kubenswrapper[4747]: E1215 05:37:38.389904 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:42.389894681 +0000 UTC m=+26.086406599 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 15 05:37:38 crc kubenswrapper[4747]: E1215 05:37:38.389398 4747 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 15 05:37:38 crc kubenswrapper[4747]: E1215 05:37:38.390072 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:42.39006397 +0000 UTC m=+26.086575887 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 15 05:37:38 crc kubenswrapper[4747]: E1215 05:37:38.389517 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 15 05:37:38 crc kubenswrapper[4747]: E1215 05:37:38.390165 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 15 05:37:38 crc kubenswrapper[4747]: E1215 05:37:38.390184 4747 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:37:38 crc kubenswrapper[4747]: E1215 05:37:38.390252 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:42.390230482 +0000 UTC m=+26.086742389 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.393551 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.409199 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.409238 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.409253 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.409269 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.409300 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:38Z","lastTransitionTime":"2025-12-15T05:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.432817 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.470958 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.510953 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.511150 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.511215 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.511293 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.511353 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:38Z","lastTransitionTime":"2025-12-15T05:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.511702 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.553753 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.592162 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.613008 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.613034 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.613044 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:38 crc 
kubenswrapper[4747]: I1215 05:37:38.613060 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.613079 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:38Z","lastTransitionTime":"2025-12-15T05:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.630187 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.630249 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:37:38 crc kubenswrapper[4747]: E1215 05:37:38.630316 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.630340 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:37:38 crc kubenswrapper[4747]: E1215 05:37:38.630457 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:37:38 crc kubenswrapper[4747]: E1215 05:37:38.630535 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.632901 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.715962 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.716279 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.716292 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.716310 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.716322 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:38Z","lastTransitionTime":"2025-12-15T05:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.770173 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerStarted","Data":"d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674"} Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.772671 4747 generic.go:334] "Generic (PLEG): container finished" podID="0b19a93a-5d3a-44c6-b207-8e4ee3be6c20" containerID="6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162" exitCode=0 Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.772756 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" event={"ID":"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20","Type":"ContainerDied","Data":"6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162"} Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.789998 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.816803 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.818412 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.818446 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.818457 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.818472 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.818483 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:38Z","lastTransitionTime":"2025-12-15T05:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.829792 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.840461 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.848975 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489
a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.878774 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.914330 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.920209 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.920244 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.920259 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.920275 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.920289 4747 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:38Z","lastTransitionTime":"2025-12-15T05:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.955229 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:38 crc kubenswrapper[4747]: I1215 05:37:38.993231 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.022827 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.022918 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.022979 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.023020 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.023039 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:39Z","lastTransitionTime":"2025-12-15T05:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.033252 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.059844 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.063259 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.075035 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.093363 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 15 
05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.125136 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.125171 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.125183 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.125201 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.125217 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:39Z","lastTransitionTime":"2025-12-15T05:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.134556 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.173583 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.212229 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.227724 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.227760 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.227771 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.227788 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.227813 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:39Z","lastTransitionTime":"2025-12-15T05:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.255083 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.293201 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.330410 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.330458 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.330469 4747 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.330488 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.330501 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:39Z","lastTransitionTime":"2025-12-15T05:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.334601 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a
7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
2-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.372419 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.417270 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.432353 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.432386 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.432398 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.432416 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.432431 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:39Z","lastTransitionTime":"2025-12-15T05:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.454631 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.493422 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.533263 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.534172 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.534205 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.534215 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.534227 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.534236 4747 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:39Z","lastTransitionTime":"2025-12-15T05:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.572102 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.612733 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.636295 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.636335 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.636346 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.636361 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.636372 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:39Z","lastTransitionTime":"2025-12-15T05:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.656127 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z 
is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.693616 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede66
51bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.731498 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.737791 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.737832 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.737843 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.737858 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.737869 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:39Z","lastTransitionTime":"2025-12-15T05:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.778478 4747 generic.go:334] "Generic (PLEG): container finished" podID="0b19a93a-5d3a-44c6-b207-8e4ee3be6c20" containerID="9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278" exitCode=0 Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.778550 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" event={"ID":"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20","Type":"ContainerDied","Data":"9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278"} Dec 15 05:37:39 crc kubenswrapper[4747]: E1215 05:37:39.785013 4747 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.794974 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.833203 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.840182 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.840331 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.840353 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.840690 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.841171 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:39Z","lastTransitionTime":"2025-12-15T05:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.872777 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.913098 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.943196 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.943236 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.943247 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.943267 4747 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.943278 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:39Z","lastTransitionTime":"2025-12-15T05:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.953786 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:39 crc kubenswrapper[4747]: I1215 05:37:39.997997 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:39Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.033059 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 
05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.046054 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.046093 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.046108 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.046125 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.046137 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:40Z","lastTransitionTime":"2025-12-15T05:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.071276 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.113134 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb
6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.148151 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.148202 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.148212 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 
05:37:40.148233 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.148246 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:40Z","lastTransitionTime":"2025-12-15T05:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.153601 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.194402 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.234275 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.251293 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.251336 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.251350 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.251369 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.251381 4747 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:40Z","lastTransitionTime":"2025-12-15T05:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.274034 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.313920 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.353618 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.353658 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.353669 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.353684 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.353699 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:40Z","lastTransitionTime":"2025-12-15T05:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.456296 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.456325 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.456336 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.456351 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.456362 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:40Z","lastTransitionTime":"2025-12-15T05:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.558884 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.558960 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.558976 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.558993 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.559005 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:40Z","lastTransitionTime":"2025-12-15T05:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.628830 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.628850 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.628905 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:37:40 crc kubenswrapper[4747]: E1215 05:37:40.629546 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:37:40 crc kubenswrapper[4747]: E1215 05:37:40.629683 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:37:40 crc kubenswrapper[4747]: E1215 05:37:40.629689 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.661552 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.661591 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.661601 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.661616 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.661629 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:40Z","lastTransitionTime":"2025-12-15T05:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.763752 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.763906 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.763996 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.764091 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.764156 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:40Z","lastTransitionTime":"2025-12-15T05:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.785694 4747 generic.go:334] "Generic (PLEG): container finished" podID="0b19a93a-5d3a-44c6-b207-8e4ee3be6c20" containerID="64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218" exitCode=0 Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.785790 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" event={"ID":"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20","Type":"ContainerDied","Data":"64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218"} Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.797861 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.811373 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489
a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.827070 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.837291 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.847865 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.859536 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.866882 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.866902 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.866912 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.866943 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.866957 4747 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:40Z","lastTransitionTime":"2025-12-15T05:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.868372 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.878598 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.887804 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.896451 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.903085 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.913857 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.923321 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.931557 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:40Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.970276 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.970308 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.970317 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:40 crc 
kubenswrapper[4747]: I1215 05:37:40.970334 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:40 crc kubenswrapper[4747]: I1215 05:37:40.970346 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:40Z","lastTransitionTime":"2025-12-15T05:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.072416 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.072452 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.072466 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.072504 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.072515 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:41Z","lastTransitionTime":"2025-12-15T05:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.174501 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.174530 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.174540 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.174551 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.174560 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:41Z","lastTransitionTime":"2025-12-15T05:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.277082 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.277128 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.277139 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.277156 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.277165 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:41Z","lastTransitionTime":"2025-12-15T05:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.379307 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.379360 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.379370 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.379391 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.379406 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:41Z","lastTransitionTime":"2025-12-15T05:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.481572 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.481619 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.481632 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.481649 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.481659 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:41Z","lastTransitionTime":"2025-12-15T05:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.583617 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.583670 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.583680 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.583702 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.583729 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:41Z","lastTransitionTime":"2025-12-15T05:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.687139 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.687173 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.687187 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.687202 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.687211 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:41Z","lastTransitionTime":"2025-12-15T05:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.789024 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.789064 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.789074 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.789088 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.789101 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:41Z","lastTransitionTime":"2025-12-15T05:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.792166 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" event={"ID":"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20","Type":"ContainerStarted","Data":"39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8"} Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.796387 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerStarted","Data":"9fd98281708fa3adc857d8e86266ea930266a36e53a6df1beda1674927e2b7c7"} Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.796625 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.796649 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.807109 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.820591 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.820890 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.821179 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.830101 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.838465 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.848094 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f
4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.855908 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:41 crc kubenswrapper[4747]: 
I1215 05:37:41.866602 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\
\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.874374 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.882861 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.889429 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.890475 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.890498 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.890508 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.890524 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.890535 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:41Z","lastTransitionTime":"2025-12-15T05:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.896256 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.904868 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.915485 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491c
d10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.924634 4747 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.935388 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.945577 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f
4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.954058 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:41 crc kubenswrapper[4747]: 
I1215 05:37:41.966434 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd98281708fa3adc857d8e86266ea930266a36e53a6df1beda1674927e2b7c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.975252 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.982005 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.989467 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.992619 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.992650 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.992662 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.992676 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.992688 4747 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:41Z","lastTransitionTime":"2025-12-15T05:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:41 crc kubenswrapper[4747]: I1215 05:37:41.997988 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:
37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:41Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.008045 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:42Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.017101 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:42Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.026006 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:42Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.035844 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:42Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.045505 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:42Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.055676 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:37:42Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.094103 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.094133 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.094142 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.094154 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.094165 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:42Z","lastTransitionTime":"2025-12-15T05:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.196144 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.196189 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.196202 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.196220 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.196231 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:42Z","lastTransitionTime":"2025-12-15T05:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.298204 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.298246 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.298254 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.298270 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.298279 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:42Z","lastTransitionTime":"2025-12-15T05:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.327539 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:37:42 crc kubenswrapper[4747]: E1215 05:37:42.327739 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-15 05:37:50.327720634 +0000 UTC m=+34.024232550 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.400141 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.400180 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.400191 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.400209 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.400220 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:42Z","lastTransitionTime":"2025-12-15T05:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.428606 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.428641 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.428669 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.428696 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:42 crc kubenswrapper[4747]: E1215 05:37:42.428764 4747 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Dec 15 05:37:42 crc kubenswrapper[4747]: E1215 05:37:42.428792 4747 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 15 05:37:42 crc kubenswrapper[4747]: E1215 05:37:42.428819 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 15 05:37:42 crc kubenswrapper[4747]: E1215 05:37:42.428841 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 15 05:37:42 crc kubenswrapper[4747]: E1215 05:37:42.428850 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:50.428836334 +0000 UTC m=+34.125348251 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 15 05:37:42 crc kubenswrapper[4747]: E1215 05:37:42.428857 4747 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:37:42 crc kubenswrapper[4747]: E1215 05:37:42.428870 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:50.428863755 +0000 UTC m=+34.125375673 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 15 05:37:42 crc kubenswrapper[4747]: E1215 05:37:42.428907 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:50.428890155 +0000 UTC m=+34.125402072 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:37:42 crc kubenswrapper[4747]: E1215 05:37:42.429014 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 15 05:37:42 crc kubenswrapper[4747]: E1215 05:37:42.429058 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 15 05:37:42 crc kubenswrapper[4747]: E1215 05:37:42.429075 4747 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:37:42 crc kubenswrapper[4747]: E1215 05:37:42.429181 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:50.429155994 +0000 UTC m=+34.125667912 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.503297 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.503342 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.503353 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.503373 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.503388 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:42Z","lastTransitionTime":"2025-12-15T05:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.606317 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.606366 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.606377 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.606397 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.606415 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:42Z","lastTransitionTime":"2025-12-15T05:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.628635 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.628689 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:37:42 crc kubenswrapper[4747]: E1215 05:37:42.628780 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.628849 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:42 crc kubenswrapper[4747]: E1215 05:37:42.629094 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:37:42 crc kubenswrapper[4747]: E1215 05:37:42.629240 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.708871 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.708912 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.708971 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.708989 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.709000 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:42Z","lastTransitionTime":"2025-12-15T05:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.801340 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-82lhw_2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7/ovnkube-controller/0.log" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.804526 4747 generic.go:334] "Generic (PLEG): container finished" podID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerID="9fd98281708fa3adc857d8e86266ea930266a36e53a6df1beda1674927e2b7c7" exitCode=1 Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.804564 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerDied","Data":"9fd98281708fa3adc857d8e86266ea930266a36e53a6df1beda1674927e2b7c7"} Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.805541 4747 scope.go:117] "RemoveContainer" containerID="9fd98281708fa3adc857d8e86266ea930266a36e53a6df1beda1674927e2b7c7" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.814779 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.814844 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.814858 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.814882 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.814899 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:42Z","lastTransitionTime":"2025-12-15T05:37:42Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.816822 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:42Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.828520 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ee
aee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc
36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f
777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"
cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason
\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:42Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.836731 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489
a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:42Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.856276 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fd98281708fa3adc857d8e86266ea930266a36e53a6df1beda1674927e2b7c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd98281708fa3adc857d8e86266ea930266a36e53a6df1beda1674927e2b7c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:42Z\\\",\\\"message\\\":\\\"ending *v1.Namespace event handler 1 for removal\\\\nI1215 05:37:42.760959 6044 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1215 05:37:42.746757 6044 reflector.go:311] Stopping reflector *v1.Pod (0s) 
from k8s.io/client-go/informers/factory.go:160\\\\nI1215 05:37:42.761013 6044 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1215 05:37:42.761021 6044 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1215 05:37:42.761210 6044 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1215 05:37:42.761230 6044 factory.go:656] Stopping watch factory\\\\nI1215 05:37:42.761250 6044 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1215 05:37:42.761262 6044 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1215 05:37:42.761272 6044 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:42.761279 6044 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1215 05:37:42.761286 6044 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:42.761292 6044 handler.go:208] Removed *v1.Node event handler 7\\\\nI1215 05:37:42.746786 6044 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1215 05:37:42.746860 6044 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604
dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:42Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.867868 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:42Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.877656 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:42Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.885616 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:42Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.900442 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:42Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.910753 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:42Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.917817 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.917862 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.917873 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.917889 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.917902 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:42Z","lastTransitionTime":"2025-12-15T05:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.922818 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:42Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.932229 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:42Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.939790 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:42Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.964543 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:42Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:42 crc kubenswrapper[4747]: I1215 05:37:42.984709 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:42Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.020554 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.020601 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.020612 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.020633 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.020644 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:43Z","lastTransitionTime":"2025-12-15T05:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.123662 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.123711 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.123721 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.123741 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.123755 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:43Z","lastTransitionTime":"2025-12-15T05:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.225625 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.225664 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.225672 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.225688 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.225701 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:43Z","lastTransitionTime":"2025-12-15T05:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.327432 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.327476 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.327489 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.327510 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.327524 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:43Z","lastTransitionTime":"2025-12-15T05:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.429514 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.429557 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.429567 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.429582 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.429594 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:43Z","lastTransitionTime":"2025-12-15T05:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.531325 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.531366 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.531377 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.531399 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.531409 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:43Z","lastTransitionTime":"2025-12-15T05:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.633971 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.634025 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.634038 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.634055 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.634071 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:43Z","lastTransitionTime":"2025-12-15T05:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.736118 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.736159 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.736170 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.736190 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.736202 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:43Z","lastTransitionTime":"2025-12-15T05:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.808377 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-82lhw_2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7/ovnkube-controller/1.log" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.809037 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-82lhw_2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7/ovnkube-controller/0.log" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.812355 4747 generic.go:334] "Generic (PLEG): container finished" podID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerID="61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090" exitCode=1 Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.812411 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerDied","Data":"61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090"} Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.812489 4747 scope.go:117] "RemoveContainer" containerID="9fd98281708fa3adc857d8e86266ea930266a36e53a6df1beda1674927e2b7c7" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.813154 4747 scope.go:117] "RemoveContainer" containerID="61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090" Dec 15 05:37:43 crc kubenswrapper[4747]: E1215 05:37:43.813370 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.833245 4747 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fd98281708fa3adc857d8e86266ea930266a36e53a6df1beda1674927e2b7c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:42Z\\\",\\\"message\\\":\\\"ending *v1.Namespace event handler 1 for removal\\\\nI1215 05:37:42.760959 6044 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1215 05:37:42.746757 6044 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1215 05:37:42.761013 6044 handler.go:190] Sending *v1.Node event 
handler 2 for removal\\\\nI1215 05:37:42.761021 6044 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1215 05:37:42.761210 6044 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1215 05:37:42.761230 6044 factory.go:656] Stopping watch factory\\\\nI1215 05:37:42.761250 6044 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1215 05:37:42.761262 6044 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1215 05:37:42.761272 6044 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:42.761279 6044 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1215 05:37:42.761286 6044 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:42.761292 6044 handler.go:208] Removed *v1.Node event handler 7\\\\nI1215 05:37:42.746786 6044 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1215 05:37:42.746860 6044 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:43Z\\\",\\\"message\\\":\\\" 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1215 05:37:43.533647 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1215 05:37:43.533740 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1215 05:37:43.533792 6167 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1215 05:37:43.533820 6167 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1215 05:37:43.533832 6167 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:43.533880 
6167 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1215 05:37:43.533911 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1215 05:37:43.533913 6167 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1215 05:37:43.533946 6167 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1215 05:37:43.533946 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1215 05:37:43.533952 6167 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1215 05:37:43.533960 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1215 05:37:43.533970 6167 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:43.534022 6167 factory.go:656] Stopping watch factory\\\\nI1215 05:37:43.534039 6167 ovnkube.go:599] Stopped ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/
var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:43Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.837840 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 
05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.837867 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.837877 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.837892 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.837903 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:43Z","lastTransitionTime":"2025-12-15T05:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.843454 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:43Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.853423 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:43Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.868564 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f
4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:43Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.879906 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:43Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:43 crc kubenswrapper[4747]: 
I1215 05:37:43.892080 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\
\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:43Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.902785 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:43Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.913072 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:43Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.921172 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:43Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.929345 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:43Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.939357 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:43Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.940092 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:43 crc 
kubenswrapper[4747]: I1215 05:37:43.940125 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.940137 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.940155 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.940166 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:43Z","lastTransitionTime":"2025-12-15T05:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.948608 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:43Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.958435 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:43Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:43 crc kubenswrapper[4747]: I1215 05:37:43.967672 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:37:43Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.042769 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.042817 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.042829 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.042847 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.042859 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:44Z","lastTransitionTime":"2025-12-15T05:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.145285 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.145321 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.145333 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.145347 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.145356 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:44Z","lastTransitionTime":"2025-12-15T05:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.247115 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.247182 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.247197 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.247212 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.247225 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:44Z","lastTransitionTime":"2025-12-15T05:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.349507 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.349556 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.349566 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.349582 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.349591 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:44Z","lastTransitionTime":"2025-12-15T05:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.451401 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.451430 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.451440 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.451455 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.451467 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:44Z","lastTransitionTime":"2025-12-15T05:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.553624 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.553672 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.553684 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.553704 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.553716 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:44Z","lastTransitionTime":"2025-12-15T05:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.629015 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.629069 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.629015 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:37:44 crc kubenswrapper[4747]: E1215 05:37:44.629195 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:37:44 crc kubenswrapper[4747]: E1215 05:37:44.629288 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:37:44 crc kubenswrapper[4747]: E1215 05:37:44.629417 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.656149 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.656173 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.656183 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.656197 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.656208 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:44Z","lastTransitionTime":"2025-12-15T05:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.757814 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.757852 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.757861 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.757874 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.757885 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:44Z","lastTransitionTime":"2025-12-15T05:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.816862 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-82lhw_2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7/ovnkube-controller/1.log" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.820810 4747 scope.go:117] "RemoveContainer" containerID="61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090" Dec 15 05:37:44 crc kubenswrapper[4747]: E1215 05:37:44.821081 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.834648 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:44Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.843341 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:44Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.852288 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:37:44Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.859907 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.860023 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.860097 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.860168 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.860235 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:44Z","lastTransitionTime":"2025-12-15T05:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.861603 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:44Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.869824 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:44Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.880033 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f
4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:44Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.887740 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:44Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:44 crc kubenswrapper[4747]: 
I1215 05:37:44.903042 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:43Z\\\",\\\"message\\\":\\\" 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1215 05:37:43.533647 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1215 05:37:43.533740 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1215 05:37:43.533792 6167 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI1215 05:37:43.533820 6167 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1215 05:37:43.533832 6167 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:43.533880 6167 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1215 05:37:43.533911 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1215 05:37:43.533913 6167 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1215 05:37:43.533946 6167 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1215 05:37:43.533946 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1215 05:37:43.533952 6167 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1215 05:37:43.533960 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1215 05:37:43.533970 6167 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:43.534022 6167 factory.go:656] Stopping watch factory\\\\nI1215 05:37:43.534039 6167 ovnkube.go:599] Stopped ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd
04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:44Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.911967 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:44Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.920341 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:44Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.928411 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:44Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.935365 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:44Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.942618 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:44Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.951731 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:44Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.962166 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:44 crc 
kubenswrapper[4747]: I1215 05:37:44.962272 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.962339 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.962413 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:44 crc kubenswrapper[4747]: I1215 05:37:44.962482 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:44Z","lastTransitionTime":"2025-12-15T05:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.064182 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.064282 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.064350 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.064409 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.064468 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:45Z","lastTransitionTime":"2025-12-15T05:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.166382 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.166433 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.166448 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.166467 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.166482 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:45Z","lastTransitionTime":"2025-12-15T05:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.268784 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.268841 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.268852 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.268873 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.268886 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:45Z","lastTransitionTime":"2025-12-15T05:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.371469 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.371502 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.371511 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.371524 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.371536 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:45Z","lastTransitionTime":"2025-12-15T05:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.473624 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.473649 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.473659 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.473673 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.473682 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:45Z","lastTransitionTime":"2025-12-15T05:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.575689 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.576034 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.576047 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.576065 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.576080 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:45Z","lastTransitionTime":"2025-12-15T05:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.678165 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.678197 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.678207 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.678221 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.678230 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:45Z","lastTransitionTime":"2025-12-15T05:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.780060 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.780097 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.780107 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.780119 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.780129 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:45Z","lastTransitionTime":"2025-12-15T05:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.882026 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.882064 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.882074 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.882086 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.882096 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:45Z","lastTransitionTime":"2025-12-15T05:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.984430 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.984479 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.984496 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.984515 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:45 crc kubenswrapper[4747]: I1215 05:37:45.984527 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:45Z","lastTransitionTime":"2025-12-15T05:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.086010 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.086056 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.086065 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.086079 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.086091 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:46Z","lastTransitionTime":"2025-12-15T05:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.188275 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.188380 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.188440 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.188593 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.188673 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:46Z","lastTransitionTime":"2025-12-15T05:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.290774 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.290826 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.290836 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.290852 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.290863 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:46Z","lastTransitionTime":"2025-12-15T05:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.392268 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.392296 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.392306 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.392319 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.392328 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:46Z","lastTransitionTime":"2025-12-15T05:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.494850 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.494896 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.494910 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.494940 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.494954 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:46Z","lastTransitionTime":"2025-12-15T05:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.597096 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.597130 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.597143 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.597155 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.597166 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:46Z","lastTransitionTime":"2025-12-15T05:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.628970 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.629019 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.628970 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:37:46 crc kubenswrapper[4747]: E1215 05:37:46.629182 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:37:46 crc kubenswrapper[4747]: E1215 05:37:46.629091 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:37:46 crc kubenswrapper[4747]: E1215 05:37:46.629352 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.641079 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.656158 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:43Z\\\",\\\"message\\\":\\\" 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1215 05:37:43.533647 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1215 05:37:43.533740 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1215 05:37:43.533792 6167 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI1215 05:37:43.533820 6167 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1215 05:37:43.533832 6167 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:43.533880 6167 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1215 05:37:43.533911 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1215 05:37:43.533913 6167 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1215 05:37:43.533946 6167 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1215 05:37:43.533946 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1215 05:37:43.533952 6167 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1215 05:37:43.533960 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1215 05:37:43.533970 6167 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:43.534022 6167 factory.go:656] Stopping watch factory\\\\nI1215 05:37:43.534039 6167 ovnkube.go:599] Stopped ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd
04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.665648 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.674280 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.680363 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t"] Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.680856 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.682682 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.682810 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.685808 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819
eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a
3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e6
95626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78c
a73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.696210 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.698496 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:46 crc 
kubenswrapper[4747]: I1215 05:37:46.698525 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.698535 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.698550 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.698562 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:46Z","lastTransitionTime":"2025-12-15T05:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.707690 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.716361 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.725840 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.733349 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.740656 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.749937 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.758256 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.767862 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a9b71b51-500c-4932-b19a-559ec3d15a5f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-82d2t\" (UID: \"a9b71b51-500c-4932-b19a-559ec3d15a5f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.767922 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a9b71b51-500c-4932-b19a-559ec3d15a5f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-82d2t\" (UID: \"a9b71b51-500c-4932-b19a-559ec3d15a5f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.767960 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjjhh\" (UniqueName: \"kubernetes.io/projected/a9b71b51-500c-4932-b19a-559ec3d15a5f-kube-api-access-qjjhh\") pod \"ovnkube-control-plane-749d76644c-82d2t\" (UID: \"a9b71b51-500c-4932-b19a-559ec3d15a5f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.768001 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a9b71b51-500c-4932-b19a-559ec3d15a5f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-82d2t\" (UID: \"a9b71b51-500c-4932-b19a-559ec3d15a5f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.769100 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.777995 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.787003 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.797944 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f
4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.800396 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.800454 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.800467 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.800483 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.800495 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:46Z","lastTransitionTime":"2025-12-15T05:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.811061 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.825136 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:43Z\\\",\\\"message\\\":\\\" 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1215 05:37:43.533647 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1215 05:37:43.533740 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1215 05:37:43.533792 6167 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI1215 05:37:43.533820 6167 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1215 05:37:43.533832 6167 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:43.533880 6167 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1215 05:37:43.533911 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1215 05:37:43.533913 6167 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1215 05:37:43.533946 6167 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1215 05:37:43.533946 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1215 05:37:43.533952 6167 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1215 05:37:43.533960 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1215 05:37:43.533970 6167 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:43.534022 6167 factory.go:656] Stopping watch factory\\\\nI1215 05:37:43.534039 6167 ovnkube.go:599] Stopped ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd
04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.833709 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.840318 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.846791 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.854576 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.863098 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.868804 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a9b71b51-500c-4932-b19a-559ec3d15a5f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-82d2t\" (UID: \"a9b71b51-500c-4932-b19a-559ec3d15a5f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.868831 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjjhh\" (UniqueName: \"kubernetes.io/projected/a9b71b51-500c-4932-b19a-559ec3d15a5f-kube-api-access-qjjhh\") pod \"ovnkube-control-plane-749d76644c-82d2t\" (UID: \"a9b71b51-500c-4932-b19a-559ec3d15a5f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" Dec 15 
05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.868869 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a9b71b51-500c-4932-b19a-559ec3d15a5f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-82d2t\" (UID: \"a9b71b51-500c-4932-b19a-559ec3d15a5f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.868895 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a9b71b51-500c-4932-b19a-559ec3d15a5f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-82d2t\" (UID: \"a9b71b51-500c-4932-b19a-559ec3d15a5f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.869427 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a9b71b51-500c-4932-b19a-559ec3d15a5f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-82d2t\" (UID: \"a9b71b51-500c-4932-b19a-559ec3d15a5f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.869498 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a9b71b51-500c-4932-b19a-559ec3d15a5f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-82d2t\" (UID: \"a9b71b51-500c-4932-b19a-559ec3d15a5f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.870973 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.875331 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a9b71b51-500c-4932-b19a-559ec3d15a5f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-82d2t\" (UID: \"a9b71b51-500c-4932-b19a-559ec3d15a5f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.880199 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.881764 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qjjhh\" (UniqueName: \"kubernetes.io/projected/a9b71b51-500c-4932-b19a-559ec3d15a5f-kube-api-access-qjjhh\") pod \"ovnkube-control-plane-749d76644c-82d2t\" (UID: \"a9b71b51-500c-4932-b19a-559ec3d15a5f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.888161 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"ima
ge\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.895428 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.902705 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9b71b51-500c-4932-b19a-559ec3d15a5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82d2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.903358 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.903394 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.903405 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.903421 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.903432 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:46Z","lastTransitionTime":"2025-12-15T05:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:46 crc kubenswrapper[4747]: I1215 05:37:46.991006 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" Dec 15 05:37:47 crc kubenswrapper[4747]: W1215 05:37:47.005155 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9b71b51_500c_4932_b19a_559ec3d15a5f.slice/crio-9502f687b0940466fedc2d60c865550e833a0132cf913290c67617a93bff857a WatchSource:0}: Error finding container 9502f687b0940466fedc2d60c865550e833a0132cf913290c67617a93bff857a: Status 404 returned error can't find the container with id 9502f687b0940466fedc2d60c865550e833a0132cf913290c67617a93bff857a Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.006187 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.006331 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.006413 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.006485 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.006548 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:47Z","lastTransitionTime":"2025-12-15T05:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.109126 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.109158 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.109169 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.109188 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.109199 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:47Z","lastTransitionTime":"2025-12-15T05:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.211899 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.211962 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.211979 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.212001 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.212015 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:47Z","lastTransitionTime":"2025-12-15T05:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.300487 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.300537 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.300547 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.300566 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.300579 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:47Z","lastTransitionTime":"2025-12-15T05:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:47 crc kubenswrapper[4747]: E1215 05:37:47.314255 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.317098 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.317130 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.317140 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.317150 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.317160 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:47Z","lastTransitionTime":"2025-12-15T05:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:47 crc kubenswrapper[4747]: E1215 05:37:47.326466 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.332743 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.332775 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.332786 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.332805 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.332813 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:47Z","lastTransitionTime":"2025-12-15T05:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:47 crc kubenswrapper[4747]: E1215 05:37:47.342013 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.344592 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.344683 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.344758 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.344833 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.344890 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:47Z","lastTransitionTime":"2025-12-15T05:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:47 crc kubenswrapper[4747]: E1215 05:37:47.352780 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.359318 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.359355 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.359366 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.359377 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.359386 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:47Z","lastTransitionTime":"2025-12-15T05:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:47 crc kubenswrapper[4747]: E1215 05:37:47.367047 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: E1215 05:37:47.367158 4747 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.368161 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.368189 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.368200 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.368209 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.368217 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:47Z","lastTransitionTime":"2025-12-15T05:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.470361 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.470394 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.470404 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.470417 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.470427 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:47Z","lastTransitionTime":"2025-12-15T05:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.571736 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.571769 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.571779 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.571793 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.571814 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:47Z","lastTransitionTime":"2025-12-15T05:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.674045 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.674073 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.674083 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.674095 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.674104 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:47Z","lastTransitionTime":"2025-12-15T05:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.744743 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-4nn8g"] Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.745472 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:37:47 crc kubenswrapper[4747]: E1215 05:37:47.745564 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.755196 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.766309 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"}
,{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.772835 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.775613 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.775634 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.775643 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.775655 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.775664 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:47Z","lastTransitionTime":"2025-12-15T05:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.779819 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.789032 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.796745 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4nn8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca0b2d2-cd19-409a-aa6d-df8b295adf62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4nn8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc 
kubenswrapper[4747]: I1215 05:37:47.806576 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.815292 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.823166 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9b71b51-500c-4932-b19a-559ec3d15a5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82d2t\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.829501 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" event={"ID":"a9b71b51-500c-4932-b19a-559ec3d15a5f","Type":"ContainerStarted","Data":"60e44712601c304d309e50b76c61dba25a4fd6d982f6dd1df36fb046b0473bb3"} Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.829606 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" event={"ID":"a9b71b51-500c-4932-b19a-559ec3d15a5f","Type":"ContainerStarted","Data":"e4bc1257d5ff3f04dcdce005fadc221c444afadbe862174495f6191537a58970"} Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.829680 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" event={"ID":"a9b71b51-500c-4932-b19a-559ec3d15a5f","Type":"ContainerStarted","Data":"9502f687b0940466fedc2d60c865550e833a0132cf913290c67617a93bff857a"} Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.831978 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.840678 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.849421 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.858737 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.869160 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f
4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.877822 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs\") pod \"network-metrics-daemon-4nn8g\" (UID: \"fca0b2d2-cd19-409a-aa6d-df8b295adf62\") " pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.877874 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn8pc\" (UniqueName: \"kubernetes.io/projected/fca0b2d2-cd19-409a-aa6d-df8b295adf62-kube-api-access-bn8pc\") pod \"network-metrics-daemon-4nn8g\" (UID: \"fca0b2d2-cd19-409a-aa6d-df8b295adf62\") " pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.879095 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.879137 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.879148 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.879168 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.879182 4747 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:47Z","lastTransitionTime":"2025-12-15T05:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.881951 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.894904 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:43Z\\\",\\\"message\\\":\\\" 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1215 05:37:43.533647 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1215 05:37:43.533740 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1215 05:37:43.533792 6167 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI1215 05:37:43.533820 6167 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1215 05:37:43.533832 6167 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:43.533880 6167 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1215 05:37:43.533911 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1215 05:37:43.533913 6167 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1215 05:37:43.533946 6167 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1215 05:37:43.533946 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1215 05:37:43.533952 6167 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1215 05:37:43.533960 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1215 05:37:43.533970 6167 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:43.534022 6167 factory.go:656] Stopping watch factory\\\\nI1215 05:37:43.534039 6167 ovnkube.go:599] Stopped ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd
04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.902718 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.910954 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.919289 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.929577 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f
4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.937724 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: 
I1215 05:37:47.950354 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:43Z\\\",\\\"message\\\":\\\" 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1215 05:37:43.533647 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1215 05:37:43.533740 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1215 05:37:43.533792 6167 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI1215 05:37:43.533820 6167 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1215 05:37:43.533832 6167 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:43.533880 6167 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1215 05:37:43.533911 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1215 05:37:43.533913 6167 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1215 05:37:43.533946 6167 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1215 05:37:43.533946 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1215 05:37:43.533952 6167 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1215 05:37:43.533960 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1215 05:37:43.533970 6167 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:43.534022 6167 factory.go:656] Stopping watch factory\\\\nI1215 05:37:43.534039 6167 ovnkube.go:599] Stopped ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd
04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.960208 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.969681 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.978735 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs\") pod \"network-metrics-daemon-4nn8g\" (UID: \"fca0b2d2-cd19-409a-aa6d-df8b295adf62\") " pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.978828 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: E1215 05:37:47.979021 4747 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 15 05:37:47 crc kubenswrapper[4747]: E1215 05:37:47.979129 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs podName:fca0b2d2-cd19-409a-aa6d-df8b295adf62 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:48.479102119 +0000 UTC m=+32.175614035 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs") pod "network-metrics-daemon-4nn8g" (UID: "fca0b2d2-cd19-409a-aa6d-df8b295adf62") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.978856 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn8pc\" (UniqueName: \"kubernetes.io/projected/fca0b2d2-cd19-409a-aa6d-df8b295adf62-kube-api-access-bn8pc\") pod \"network-metrics-daemon-4nn8g\" (UID: \"fca0b2d2-cd19-409a-aa6d-df8b295adf62\") " pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.981898 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.981954 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.981978 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.981997 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.982008 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:47Z","lastTransitionTime":"2025-12-15T05:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.991123 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.997012 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn8pc\" (UniqueName: \"kubernetes.io/projected/fca0b2d2-cd19-409a-aa6d-df8b295adf62-kube-api-access-bn8pc\") pod \"network-metrics-daemon-4nn8g\" (UID: \"fca0b2d2-cd19-409a-aa6d-df8b295adf62\") " pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:37:47 crc kubenswrapper[4747]: I1215 05:37:47.999226 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:47Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.008400 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:48Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.015769 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4nn8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca0b2d2-cd19-409a-aa6d-df8b295adf62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4nn8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:48Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:48 crc 
kubenswrapper[4747]: I1215 05:37:48.031261 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:48Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.041105 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:48Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.048638 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9b71b51-500c-4932-b19a-559ec3d15a5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4bc1257d5ff3f04dcdce005fadc221c444afadbe862174495f6191537a58970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e44712601c304d309e50b76c61dba25a4fd
6d982f6dd1df36fb046b0473bb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82d2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:48Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.084217 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.084248 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.084258 4747 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.084274 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.084284 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:48Z","lastTransitionTime":"2025-12-15T05:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.187109 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.187146 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.187158 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.187179 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.187189 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:48Z","lastTransitionTime":"2025-12-15T05:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.289433 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.289477 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.289488 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.289507 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.289519 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:48Z","lastTransitionTime":"2025-12-15T05:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.391942 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.392145 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.392203 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.392268 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.392344 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:48Z","lastTransitionTime":"2025-12-15T05:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.484409 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs\") pod \"network-metrics-daemon-4nn8g\" (UID: \"fca0b2d2-cd19-409a-aa6d-df8b295adf62\") " pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:37:48 crc kubenswrapper[4747]: E1215 05:37:48.484570 4747 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 15 05:37:48 crc kubenswrapper[4747]: E1215 05:37:48.484634 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs podName:fca0b2d2-cd19-409a-aa6d-df8b295adf62 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:49.484615205 +0000 UTC m=+33.181127122 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs") pod "network-metrics-daemon-4nn8g" (UID: "fca0b2d2-cd19-409a-aa6d-df8b295adf62") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.494555 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.494595 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.494609 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.494628 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.494648 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:48Z","lastTransitionTime":"2025-12-15T05:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.597239 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.597283 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.597299 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.597316 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.597328 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:48Z","lastTransitionTime":"2025-12-15T05:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.629014 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.629059 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:37:48 crc kubenswrapper[4747]: E1215 05:37:48.629124 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.629014 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:48 crc kubenswrapper[4747]: E1215 05:37:48.629217 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:37:48 crc kubenswrapper[4747]: E1215 05:37:48.629280 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.700224 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.700274 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.700286 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.700307 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.700322 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:48Z","lastTransitionTime":"2025-12-15T05:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.801943 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.801974 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.801984 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.801996 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.802004 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:48Z","lastTransitionTime":"2025-12-15T05:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.903828 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.903874 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.903885 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.903904 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:48 crc kubenswrapper[4747]: I1215 05:37:48.903918 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:48Z","lastTransitionTime":"2025-12-15T05:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.006749 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.006787 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.006808 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.006826 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.006838 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:49Z","lastTransitionTime":"2025-12-15T05:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.108653 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.108695 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.108711 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.108725 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.108737 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:49Z","lastTransitionTime":"2025-12-15T05:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.210781 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.210831 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.210844 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.210859 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.210872 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:49Z","lastTransitionTime":"2025-12-15T05:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.312606 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.312655 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.312665 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.312680 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.312690 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:49Z","lastTransitionTime":"2025-12-15T05:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.414770 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.414817 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.414828 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.414842 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.414853 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:49Z","lastTransitionTime":"2025-12-15T05:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.494049 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs\") pod \"network-metrics-daemon-4nn8g\" (UID: \"fca0b2d2-cd19-409a-aa6d-df8b295adf62\") " pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:37:49 crc kubenswrapper[4747]: E1215 05:37:49.494213 4747 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 15 05:37:49 crc kubenswrapper[4747]: E1215 05:37:49.494278 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs podName:fca0b2d2-cd19-409a-aa6d-df8b295adf62 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:51.494262586 +0000 UTC m=+35.190774503 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs") pod "network-metrics-daemon-4nn8g" (UID: "fca0b2d2-cd19-409a-aa6d-df8b295adf62") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.517220 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.517267 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.517280 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.517297 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.517308 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:49Z","lastTransitionTime":"2025-12-15T05:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.622841 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.622889 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.622901 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.622941 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.622958 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:49Z","lastTransitionTime":"2025-12-15T05:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.628354 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:37:49 crc kubenswrapper[4747]: E1215 05:37:49.628492 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.725511 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.725557 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.725572 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.725584 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.725594 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:49Z","lastTransitionTime":"2025-12-15T05:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.827734 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.827781 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.827792 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.827819 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.827832 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:49Z","lastTransitionTime":"2025-12-15T05:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.929975 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.930013 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.930032 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.930051 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:49 crc kubenswrapper[4747]: I1215 05:37:49.930062 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:49Z","lastTransitionTime":"2025-12-15T05:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.032424 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.032484 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.032499 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.032517 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.032534 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:50Z","lastTransitionTime":"2025-12-15T05:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.134556 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.134603 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.134612 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.134627 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.134638 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:50Z","lastTransitionTime":"2025-12-15T05:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.236768 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.236804 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.236817 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.236827 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.236837 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:50Z","lastTransitionTime":"2025-12-15T05:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.338299 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.338330 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.338338 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.338363 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.338372 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:50Z","lastTransitionTime":"2025-12-15T05:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.401846 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:37:50 crc kubenswrapper[4747]: E1215 05:37:50.402061 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-15 05:38:06.402039857 +0000 UTC m=+50.098551774 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.440489 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.440525 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.440534 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.440549 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.440559 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:50Z","lastTransitionTime":"2025-12-15T05:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.503164 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.503214 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.503249 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.503276 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:50 crc kubenswrapper[4747]: E1215 05:37:50.503369 4747 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Dec 15 05:37:50 crc kubenswrapper[4747]: E1215 05:37:50.503413 4747 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 15 05:37:50 crc kubenswrapper[4747]: E1215 05:37:50.503426 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 15 05:37:50 crc kubenswrapper[4747]: E1215 05:37:50.503454 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 15 05:37:50 crc kubenswrapper[4747]: E1215 05:37:50.503468 4747 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:37:50 crc kubenswrapper[4747]: E1215 05:37:50.503441 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-15 05:38:06.503426807 +0000 UTC m=+50.199938724 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 15 05:37:50 crc kubenswrapper[4747]: E1215 05:37:50.503506 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 15 05:37:50 crc kubenswrapper[4747]: E1215 05:37:50.503562 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 15 05:37:50 crc kubenswrapper[4747]: E1215 05:37:50.503517 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-15 05:38:06.503505826 +0000 UTC m=+50.200017743 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 15 05:37:50 crc kubenswrapper[4747]: E1215 05:37:50.503584 4747 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 15 05:37:50 crc kubenswrapper[4747]: E1215 05:37:50.503599 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-15 05:38:06.503585295 +0000 UTC m=+50.200097212 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 15 05:37:50 crc kubenswrapper[4747]: E1215 05:37:50.503664 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-15 05:38:06.503641731 +0000 UTC m=+50.200153679 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.542616 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.542643 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.542651 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.542662 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.542671 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:50Z","lastTransitionTime":"2025-12-15T05:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.628754 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.628772 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.628762 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 15 05:37:50 crc kubenswrapper[4747]: E1215 05:37:50.628883 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 15 05:37:50 crc kubenswrapper[4747]: E1215 05:37:50.629019 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 15 05:37:50 crc kubenswrapper[4747]: E1215 05:37:50.629103 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.644454 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.644488 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.644521 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.644540 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.644550 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:50Z","lastTransitionTime":"2025-12-15T05:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.746318 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.746379 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.746390 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.746404 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.746414 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:50Z","lastTransitionTime":"2025-12-15T05:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.848370 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.848406 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.848418 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.848432 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.848442 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:50Z","lastTransitionTime":"2025-12-15T05:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.950117 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.950165 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.950178 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.950192 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:50 crc kubenswrapper[4747]: I1215 05:37:50.950202 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:50Z","lastTransitionTime":"2025-12-15T05:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.052031 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.052076 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.052090 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.052106 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.052117 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:51Z","lastTransitionTime":"2025-12-15T05:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.154410 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.154453 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.154465 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.154481 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.154495 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:51Z","lastTransitionTime":"2025-12-15T05:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.255784 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.255824 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.255833 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.255846 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.255858 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:51Z","lastTransitionTime":"2025-12-15T05:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.357408 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.357434 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.357445 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.357455 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.357466 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:51Z","lastTransitionTime":"2025-12-15T05:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.459067 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.459108 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.459119 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.459137 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.459148 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:51Z","lastTransitionTime":"2025-12-15T05:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.510016 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs\") pod \"network-metrics-daemon-4nn8g\" (UID: \"fca0b2d2-cd19-409a-aa6d-df8b295adf62\") " pod="openshift-multus/network-metrics-daemon-4nn8g"
Dec 15 05:37:51 crc kubenswrapper[4747]: E1215 05:37:51.510207 4747 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 15 05:37:51 crc kubenswrapper[4747]: E1215 05:37:51.510277 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs podName:fca0b2d2-cd19-409a-aa6d-df8b295adf62 nodeName:}" failed. No retries permitted until 2025-12-15 05:37:55.510249826 +0000 UTC m=+39.206761743 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs") pod "network-metrics-daemon-4nn8g" (UID: "fca0b2d2-cd19-409a-aa6d-df8b295adf62") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.561107 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.561137 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.561147 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.561161 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.561170 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:51Z","lastTransitionTime":"2025-12-15T05:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.628532 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g"
Dec 15 05:37:51 crc kubenswrapper[4747]: E1215 05:37:51.628647 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.663613 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.663652 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.663665 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.663680 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.663690 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:51Z","lastTransitionTime":"2025-12-15T05:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.765483 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.765530 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.765540 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.765555 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.765565 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:51Z","lastTransitionTime":"2025-12-15T05:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.872584 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.872634 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.872646 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.872661 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.872672 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:51Z","lastTransitionTime":"2025-12-15T05:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.975216 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.975272 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.975283 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.975302 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:51 crc kubenswrapper[4747]: I1215 05:37:51.975313 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:51Z","lastTransitionTime":"2025-12-15T05:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.077687 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.077733 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.077744 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.077764 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.077775 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:52Z","lastTransitionTime":"2025-12-15T05:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.180196 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.180233 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.180244 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.180256 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.180265 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:52Z","lastTransitionTime":"2025-12-15T05:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.282167 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.282202 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.282212 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.282227 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.282240 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:52Z","lastTransitionTime":"2025-12-15T05:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.384273 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.384308 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.384320 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.384338 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.384347 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:52Z","lastTransitionTime":"2025-12-15T05:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.486407 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.486440 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.486450 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.486464 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.486474 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:52Z","lastTransitionTime":"2025-12-15T05:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.588058 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.588100 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.588110 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.588129 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.588144 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:52Z","lastTransitionTime":"2025-12-15T05:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.629000 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.629076 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 15 05:37:52 crc kubenswrapper[4747]: E1215 05:37:52.629123 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.629168 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 15 05:37:52 crc kubenswrapper[4747]: E1215 05:37:52.629330 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 15 05:37:52 crc kubenswrapper[4747]: E1215 05:37:52.629432 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.689391 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.689431 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.689441 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.689453 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.689463 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:52Z","lastTransitionTime":"2025-12-15T05:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.792042 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.792077 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.792089 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.792100 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.792112 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:52Z","lastTransitionTime":"2025-12-15T05:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.893579 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.893608 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.893619 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.893630 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.893638 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:52Z","lastTransitionTime":"2025-12-15T05:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.995677 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.995728 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.995740 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.995759 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:52 crc kubenswrapper[4747]: I1215 05:37:52.995776 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:52Z","lastTransitionTime":"2025-12-15T05:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.097959 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.097998 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.098007 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.098022 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.098034 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:53Z","lastTransitionTime":"2025-12-15T05:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.199969 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.200016 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.200029 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.200043 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.200056 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:53Z","lastTransitionTime":"2025-12-15T05:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.301818 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.301847 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.301857 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.301869 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.301879 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:53Z","lastTransitionTime":"2025-12-15T05:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.404117 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.404174 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.404186 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.404196 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.404204 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:53Z","lastTransitionTime":"2025-12-15T05:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.506654 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.506700 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.506711 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.506725 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.506736 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:53Z","lastTransitionTime":"2025-12-15T05:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.608978 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.609022 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.609032 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.609045 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.609055 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:53Z","lastTransitionTime":"2025-12-15T05:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.628685 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:37:53 crc kubenswrapper[4747]: E1215 05:37:53.628779 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.711588 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.711624 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.711633 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.711649 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.711678 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:53Z","lastTransitionTime":"2025-12-15T05:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.813446 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.813490 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.813500 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.813518 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.813530 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:53Z","lastTransitionTime":"2025-12-15T05:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.914884 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.915076 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.915144 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.915206 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:53 crc kubenswrapper[4747]: I1215 05:37:53.915261 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:53Z","lastTransitionTime":"2025-12-15T05:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.016902 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.016949 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.016962 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.016975 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.016984 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:54Z","lastTransitionTime":"2025-12-15T05:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.118860 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.118991 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.119076 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.119144 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.119210 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:54Z","lastTransitionTime":"2025-12-15T05:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.221623 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.221661 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.221689 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.221704 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.221714 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:54Z","lastTransitionTime":"2025-12-15T05:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.324008 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.324137 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.324204 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.324268 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.324493 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:54Z","lastTransitionTime":"2025-12-15T05:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.432479 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.432601 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.432703 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.432810 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.432906 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:54Z","lastTransitionTime":"2025-12-15T05:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.535123 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.535234 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.535300 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.535366 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.535421 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:54Z","lastTransitionTime":"2025-12-15T05:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.628354 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.628365 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.628501 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:54 crc kubenswrapper[4747]: E1215 05:37:54.628648 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:37:54 crc kubenswrapper[4747]: E1215 05:37:54.628711 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:37:54 crc kubenswrapper[4747]: E1215 05:37:54.628856 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.637578 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.637604 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.637612 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.637624 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.637637 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:54Z","lastTransitionTime":"2025-12-15T05:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.739363 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.739401 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.739413 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.739432 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.739444 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:54Z","lastTransitionTime":"2025-12-15T05:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.841845 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.841882 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.841894 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.841907 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.841917 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:54Z","lastTransitionTime":"2025-12-15T05:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.943484 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.943527 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.943538 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.943556 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:54 crc kubenswrapper[4747]: I1215 05:37:54.943569 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:54Z","lastTransitionTime":"2025-12-15T05:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.045440 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.045477 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.045489 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.045501 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.045509 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:55Z","lastTransitionTime":"2025-12-15T05:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.147419 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.147450 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.147461 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.147474 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.147482 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:55Z","lastTransitionTime":"2025-12-15T05:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.249600 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.249631 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.249640 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.249658 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.249670 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:55Z","lastTransitionTime":"2025-12-15T05:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.351622 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.351643 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.351654 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.351665 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.351676 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:55Z","lastTransitionTime":"2025-12-15T05:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.453471 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.453516 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.453527 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.453537 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.453544 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:55Z","lastTransitionTime":"2025-12-15T05:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.546880 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs\") pod \"network-metrics-daemon-4nn8g\" (UID: \"fca0b2d2-cd19-409a-aa6d-df8b295adf62\") " pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:37:55 crc kubenswrapper[4747]: E1215 05:37:55.547049 4747 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 15 05:37:55 crc kubenswrapper[4747]: E1215 05:37:55.547124 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs podName:fca0b2d2-cd19-409a-aa6d-df8b295adf62 nodeName:}" failed. No retries permitted until 2025-12-15 05:38:03.54710687 +0000 UTC m=+47.243618787 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs") pod "network-metrics-daemon-4nn8g" (UID: "fca0b2d2-cd19-409a-aa6d-df8b295adf62") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.554855 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.554881 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.554893 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.554907 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.554916 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:55Z","lastTransitionTime":"2025-12-15T05:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.571819 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.582435 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:55Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.594233 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:55Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.603476 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:55Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.614527 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f
4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:55Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.623037 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:55Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:55 crc kubenswrapper[4747]: 
I1215 05:37:55.628304 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:37:55 crc kubenswrapper[4747]: E1215 05:37:55.628410 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.636948 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:43Z\\\",\\\"message\\\":\\\" 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1215 05:37:43.533647 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1215 05:37:43.533740 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1215 05:37:43.533792 6167 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI1215 05:37:43.533820 6167 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1215 05:37:43.533832 6167 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:43.533880 6167 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1215 05:37:43.533911 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1215 05:37:43.533913 6167 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1215 05:37:43.533946 6167 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1215 05:37:43.533946 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1215 05:37:43.533952 6167 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1215 05:37:43.533960 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1215 05:37:43.533970 6167 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:43.534022 6167 factory.go:656] Stopping watch factory\\\\nI1215 05:37:43.534039 6167 ovnkube.go:599] Stopped ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd
04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:55Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.646792 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:55Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.653877 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:55Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.656510 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.656556 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.656568 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.656586 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.656597 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:55Z","lastTransitionTime":"2025-12-15T05:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.661457 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:55Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.671984 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:55Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.679737 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4nn8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca0b2d2-cd19-409a-aa6d-df8b295adf62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4nn8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:55Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:55 crc 
kubenswrapper[4747]: I1215 05:37:55.691355 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c3
3480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 
05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:55Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.701140 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:55Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.710419 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9b71b51-500c-4932-b19a-559ec3d15a5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4bc1257d5ff3f04dcdce005fadc221c444afadbe862174495f6191537a58970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plan
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e44712601c304d309e50b76c61dba25a4fd6d982f6dd1df36fb046b0473bb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82d2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:55Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.719579 4747 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:55Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.728862 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:55Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.758371 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.758406 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.758420 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:55 crc 
kubenswrapper[4747]: I1215 05:37:55.758438 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.758449 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:55Z","lastTransitionTime":"2025-12-15T05:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.860673 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.860818 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.860895 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.861000 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.861163 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:55Z","lastTransitionTime":"2025-12-15T05:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.963633 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.963776 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.963791 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.963821 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:55 crc kubenswrapper[4747]: I1215 05:37:55.963831 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:55Z","lastTransitionTime":"2025-12-15T05:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.065975 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.066028 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.066038 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.066058 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.066070 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:56Z","lastTransitionTime":"2025-12-15T05:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.168591 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.168640 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.168650 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.168668 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.168682 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:56Z","lastTransitionTime":"2025-12-15T05:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.270309 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.270348 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.270357 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.270369 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.270382 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:56Z","lastTransitionTime":"2025-12-15T05:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.372121 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.372169 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.372180 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.372198 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.372209 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:56Z","lastTransitionTime":"2025-12-15T05:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.474789 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.474848 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.474859 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.474875 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.474886 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:56Z","lastTransitionTime":"2025-12-15T05:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.577099 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.577132 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.577146 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.577161 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.577171 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:56Z","lastTransitionTime":"2025-12-15T05:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.628436 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:37:56 crc kubenswrapper[4747]: E1215 05:37:56.628539 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.628858 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.628892 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:37:56 crc kubenswrapper[4747]: E1215 05:37:56.628921 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:37:56 crc kubenswrapper[4747]: E1215 05:37:56.629036 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.629694 4747 scope.go:117] "RemoveContainer" containerID="61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.641654 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.650435 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9b71b51-500c-4932-b19a-559ec3d15a5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4bc1257d5ff3f04dcdce005fadc221c444afadbe862174495f6191537a58970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e44712601c304d309e50b76c61dba25a4fd
6d982f6dd1df36fb046b0473bb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82d2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.660410 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.669654 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.678590 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.679097 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.679126 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.679138 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.679156 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.679168 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:56Z","lastTransitionTime":"2025-12-15T05:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.686870 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.696917 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f
4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.705307 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: 
I1215 05:37:56.718197 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:43Z\\\",\\\"message\\\":\\\" 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1215 05:37:43.533647 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1215 05:37:43.533740 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1215 05:37:43.533792 6167 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI1215 05:37:43.533820 6167 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1215 05:37:43.533832 6167 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:43.533880 6167 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1215 05:37:43.533911 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1215 05:37:43.533913 6167 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1215 05:37:43.533946 6167 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1215 05:37:43.533946 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1215 05:37:43.533952 6167 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1215 05:37:43.533960 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1215 05:37:43.533970 6167 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:43.534022 6167 factory.go:656] Stopping watch factory\\\\nI1215 05:37:43.534039 6167 ovnkube.go:599] Stopped ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd
04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.726912 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.736546 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.745829 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.753992 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.764424 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.774268 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4nn8g" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca0b2d2-cd19-409a-aa6d-df8b295adf62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4nn8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc 
kubenswrapper[4747]: I1215 05:37:56.788788 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c3
3480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 
05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.791549 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.791587 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.791601 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.791617 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.791627 4747 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:56Z","lastTransitionTime":"2025-12-15T05:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.863687 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-82lhw_2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7/ovnkube-controller/1.log" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.865573 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerStarted","Data":"72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93"} Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.865760 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.881064 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9b71b51-500c-4932-b19a-559ec3d15a5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4bc1257d5ff3f04dcdce005fadc221c444afadbe862174495f6191537a58970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e44712601c304d309e50b76c61dba25a4fd
6d982f6dd1df36fb046b0473bb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82d2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.892602 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.893499 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.893549 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.893561 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.893577 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.893630 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:56Z","lastTransitionTime":"2025-12-15T05:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.908669 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.920382 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.935920 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.951533 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.968637 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f
4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.977984 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: 
I1215 05:37:56.991220 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:43Z\\\",\\\"message\\\":\\\" 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1215 05:37:43.533647 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1215 05:37:43.533740 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1215 05:37:43.533792 6167 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI1215 05:37:43.533820 6167 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1215 05:37:43.533832 6167 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:43.533880 6167 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1215 05:37:43.533911 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1215 05:37:43.533913 6167 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1215 05:37:43.533946 6167 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1215 05:37:43.533946 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1215 05:37:43.533952 6167 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1215 05:37:43.533960 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1215 05:37:43.533970 6167 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:43.534022 6167 factory.go:656] Stopping watch factory\\\\nI1215 05:37:43.534039 6167 ovnkube.go:599] Stopped 
ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\
",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.996207 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.996330 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.996421 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.996520 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:56 crc kubenswrapper[4747]: I1215 05:37:56.996596 4747 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:56Z","lastTransitionTime":"2025-12-15T05:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.000104 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:56Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.006639 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.013083 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.021133 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.028268 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4nn8g" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca0b2d2-cd19-409a-aa6d-df8b295adf62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4nn8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc 
kubenswrapper[4747]: I1215 05:37:57.037181 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c3
3480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 
05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.046577 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.098181 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.098204 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.098214 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.098230 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.098242 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:57Z","lastTransitionTime":"2025-12-15T05:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.200507 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.200541 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.200551 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.200562 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.200574 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:57Z","lastTransitionTime":"2025-12-15T05:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.302520 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.302578 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.302590 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.302611 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.302622 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:57Z","lastTransitionTime":"2025-12-15T05:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.393223 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.393267 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.393278 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.393297 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.393312 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:57Z","lastTransitionTime":"2025-12-15T05:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:57 crc kubenswrapper[4747]: E1215 05:37:57.402961 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.406867 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.406901 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.406911 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.406943 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.406954 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:57Z","lastTransitionTime":"2025-12-15T05:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:57 crc kubenswrapper[4747]: E1215 05:37:57.416026 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.418916 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.418995 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.419007 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.419026 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.419042 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:57Z","lastTransitionTime":"2025-12-15T05:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:57 crc kubenswrapper[4747]: E1215 05:37:57.428834 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.431517 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.431553 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.431564 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.431581 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.431592 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:57Z","lastTransitionTime":"2025-12-15T05:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:57 crc kubenswrapper[4747]: E1215 05:37:57.440184 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.447458 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.447499 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.447511 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.447528 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.447539 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:57Z","lastTransitionTime":"2025-12-15T05:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:57 crc kubenswrapper[4747]: E1215 05:37:57.456694 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc kubenswrapper[4747]: E1215 05:37:57.456813 4747 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.458036 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.458066 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.458076 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.458086 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.458094 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:57Z","lastTransitionTime":"2025-12-15T05:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.559600 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.559634 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.559644 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.559656 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.559664 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:57Z","lastTransitionTime":"2025-12-15T05:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.628652 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:37:57 crc kubenswrapper[4747]: E1215 05:37:57.628835 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.661585 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.661619 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.661634 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.661649 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.661660 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:57Z","lastTransitionTime":"2025-12-15T05:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.763850 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.763918 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.763960 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.763981 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.763997 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:57Z","lastTransitionTime":"2025-12-15T05:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.866336 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.866368 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.866378 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.866392 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.866402 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:57Z","lastTransitionTime":"2025-12-15T05:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.870158 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-82lhw_2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7/ovnkube-controller/2.log" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.870711 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-82lhw_2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7/ovnkube-controller/1.log" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.874193 4747 generic.go:334] "Generic (PLEG): container finished" podID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerID="72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93" exitCode=1 Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.874247 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerDied","Data":"72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93"} Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.874306 4747 scope.go:117] "RemoveContainer" containerID="61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.875312 4747 scope.go:117] "RemoveContainer" containerID="72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93" Dec 15 05:37:57 crc kubenswrapper[4747]: E1215 05:37:57.875545 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.887089 4747 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.900043 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e5431
9f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.908081 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.921681 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61cb963b01ba342f581af75f1785b8dd147c21a60f563dee8eff960476970090\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:43Z\\\",\\\"message\\\":\\\" 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1215 05:37:43.533647 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1215 05:37:43.533740 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1215 05:37:43.533792 6167 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI1215 05:37:43.533820 6167 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1215 05:37:43.533832 6167 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:43.533880 6167 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1215 05:37:43.533911 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1215 05:37:43.533913 6167 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1215 05:37:43.533946 6167 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1215 05:37:43.533946 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1215 05:37:43.533952 6167 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1215 05:37:43.533960 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1215 05:37:43.533970 6167 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:43.534022 6167 factory.go:656] Stopping watch factory\\\\nI1215 05:37:43.534039 6167 ovnkube.go:599] Stopped ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1215 05:37:57.248195 6397 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1215 05:37:57.248522 6397 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1215 05:37:57.254515 6397 
handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1215 05:37:57.254583 6397 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1215 05:37:57.254642 6397 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:57.254715 6397 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:57.254743 6397 factory.go:656] Stopping watch factory\\\\nI1215 05:37:57.286414 6397 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1215 05:37:57.286444 6397 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1215 05:37:57.286715 6397 ovnkube.go:599] Stopped ovnkube\\\\nI1215 05:37:57.286741 6397 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1215 05:37:57.286813 6397 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"
mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.930894 4747 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.939654 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.948151 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.957310 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.966243 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4nn8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca0b2d2-cd19-409a-aa6d-df8b295adf62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4nn8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc 
kubenswrapper[4747]: I1215 05:37:57.968541 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.968605 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.968619 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.968633 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.968646 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:57Z","lastTransitionTime":"2025-12-15T05:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.977298 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.987079 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:57 crc kubenswrapper[4747]: I1215 05:37:57.996199 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:57Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.003428 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:58Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.012020 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:58Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.021158 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:58Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.030384 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9b71b51-500c-4932-b19a-559ec3d15a5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4bc1257d5ff3f04dcdce005fadc221c444afadbe862174495f6191537a58970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e44712601c304d309e50b76c61dba25a4fd
6d982f6dd1df36fb046b0473bb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82d2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:37:58Z is after 2025-08-24T17:21:41Z" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.071513 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.071612 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.071784 4747 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.071976 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.072139 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:58Z","lastTransitionTime":"2025-12-15T05:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.173657 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.173685 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.173695 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.173707 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.173719 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:58Z","lastTransitionTime":"2025-12-15T05:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.275614 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.275668 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.275679 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.275692 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.275701 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:58Z","lastTransitionTime":"2025-12-15T05:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.377578 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.377654 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.377666 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.377684 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.377695 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:58Z","lastTransitionTime":"2025-12-15T05:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.479522 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.479561 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.479571 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.479582 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.479589 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:58Z","lastTransitionTime":"2025-12-15T05:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.581944 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.582189 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.582201 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.582214 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.582223 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:58Z","lastTransitionTime":"2025-12-15T05:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.629148 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.629196 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.629336 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:37:58 crc kubenswrapper[4747]: E1215 05:37:58.629308 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:37:58 crc kubenswrapper[4747]: E1215 05:37:58.629446 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:37:58 crc kubenswrapper[4747]: E1215 05:37:58.629587 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.683520 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.683555 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.683567 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.683582 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.683592 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:58Z","lastTransitionTime":"2025-12-15T05:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.786209 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.786280 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.786293 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.786308 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.786320 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:58Z","lastTransitionTime":"2025-12-15T05:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.878391 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-82lhw_2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7/ovnkube-controller/2.log" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.888102 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.888135 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.888143 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.888156 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.888166 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:58Z","lastTransitionTime":"2025-12-15T05:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.989830 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.989868 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.989880 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.989891 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:58 crc kubenswrapper[4747]: I1215 05:37:58.989904 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:58Z","lastTransitionTime":"2025-12-15T05:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.091758 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.091813 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.091825 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.091840 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.091850 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:59Z","lastTransitionTime":"2025-12-15T05:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.193283 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.193342 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.193357 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.193371 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.193380 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:59Z","lastTransitionTime":"2025-12-15T05:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.295579 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.295619 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.295630 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.295643 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.295654 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:59Z","lastTransitionTime":"2025-12-15T05:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.398050 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.398085 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.398097 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.398110 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.398122 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:59Z","lastTransitionTime":"2025-12-15T05:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.500783 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.500848 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.500860 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.500875 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.500887 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:59Z","lastTransitionTime":"2025-12-15T05:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.602276 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.602309 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.602324 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.602338 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.602349 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:59Z","lastTransitionTime":"2025-12-15T05:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.628334 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:37:59 crc kubenswrapper[4747]: E1215 05:37:59.628447 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.704864 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.704901 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.704915 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.704946 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.704958 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:59Z","lastTransitionTime":"2025-12-15T05:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.807112 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.807159 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.807169 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.807188 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.807203 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:59Z","lastTransitionTime":"2025-12-15T05:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.908652 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.908681 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.908691 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.908704 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:37:59 crc kubenswrapper[4747]: I1215 05:37:59.908714 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:37:59Z","lastTransitionTime":"2025-12-15T05:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.010576 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.010600 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.010611 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.010623 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.010634 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:00Z","lastTransitionTime":"2025-12-15T05:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.112605 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.112642 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.112652 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.112665 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.112676 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:00Z","lastTransitionTime":"2025-12-15T05:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.214352 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.214893 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.215058 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.215198 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.215272 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:00Z","lastTransitionTime":"2025-12-15T05:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.316670 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.316731 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.316744 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.316761 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.316772 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:00Z","lastTransitionTime":"2025-12-15T05:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.419154 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.419194 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.419209 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.419222 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.419232 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:00Z","lastTransitionTime":"2025-12-15T05:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.520979 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.521036 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.521049 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.521063 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.521077 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:00Z","lastTransitionTime":"2025-12-15T05:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.623524 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.623553 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.623563 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.623577 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.623589 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:00Z","lastTransitionTime":"2025-12-15T05:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.629052 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.629068 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:00 crc kubenswrapper[4747]: E1215 05:38:00.629247 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.629069 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:00 crc kubenswrapper[4747]: E1215 05:38:00.629415 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:00 crc kubenswrapper[4747]: E1215 05:38:00.629502 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.725677 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.725721 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.725733 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.725753 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.725764 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:00Z","lastTransitionTime":"2025-12-15T05:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.827484 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.827514 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.827523 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.827536 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.827547 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:00Z","lastTransitionTime":"2025-12-15T05:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.929480 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.929592 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.929664 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.929726 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:00 crc kubenswrapper[4747]: I1215 05:38:00.929789 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:00Z","lastTransitionTime":"2025-12-15T05:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.032231 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.032265 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.032277 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.032292 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.032303 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:01Z","lastTransitionTime":"2025-12-15T05:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.134504 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.134554 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.134567 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.134586 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.134600 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:01Z","lastTransitionTime":"2025-12-15T05:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.236255 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.236287 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.236299 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.236312 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.236321 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:01Z","lastTransitionTime":"2025-12-15T05:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.338491 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.338643 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.338710 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.338818 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.338901 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:01Z","lastTransitionTime":"2025-12-15T05:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.441703 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.441753 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.441765 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.441788 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.441809 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:01Z","lastTransitionTime":"2025-12-15T05:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.544060 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.544104 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.544113 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.544128 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.544139 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:01Z","lastTransitionTime":"2025-12-15T05:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.628475 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:01 crc kubenswrapper[4747]: E1215 05:38:01.628597 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.646039 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.646076 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.646086 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.646098 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.646109 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:01Z","lastTransitionTime":"2025-12-15T05:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.747960 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.747994 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.748006 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.748018 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.748028 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:01Z","lastTransitionTime":"2025-12-15T05:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.849958 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.850022 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.850041 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.850064 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.850079 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:01Z","lastTransitionTime":"2025-12-15T05:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.951964 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.952001 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.952011 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.952022 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:01 crc kubenswrapper[4747]: I1215 05:38:01.952034 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:01Z","lastTransitionTime":"2025-12-15T05:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.054427 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.054469 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.054479 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.054498 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.054509 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:02Z","lastTransitionTime":"2025-12-15T05:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.155871 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.156063 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.156150 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.156216 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.156272 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:02Z","lastTransitionTime":"2025-12-15T05:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.257745 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.257775 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.257783 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.257812 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.257821 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:02Z","lastTransitionTime":"2025-12-15T05:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.359840 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.360078 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.360146 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.360216 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.360279 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:02Z","lastTransitionTime":"2025-12-15T05:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.462385 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.462539 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.462612 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.462677 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.462730 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:02Z","lastTransitionTime":"2025-12-15T05:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.565007 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.565039 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.565060 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.565072 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.565083 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:02Z","lastTransitionTime":"2025-12-15T05:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.628790 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:02 crc kubenswrapper[4747]: E1215 05:38:02.628963 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.629061 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:02 crc kubenswrapper[4747]: E1215 05:38:02.629145 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.629328 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:02 crc kubenswrapper[4747]: E1215 05:38:02.629585 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.667356 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.667406 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.667419 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.667435 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.667450 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:02Z","lastTransitionTime":"2025-12-15T05:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.769443 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.769485 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.769495 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.769512 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.769523 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:02Z","lastTransitionTime":"2025-12-15T05:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.870947 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.870982 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.870992 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.871006 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.871019 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:02Z","lastTransitionTime":"2025-12-15T05:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.973488 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.973521 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.973531 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.973544 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:02 crc kubenswrapper[4747]: I1215 05:38:02.973552 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:02Z","lastTransitionTime":"2025-12-15T05:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.075088 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.075116 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.075126 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.075136 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.075145 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:03Z","lastTransitionTime":"2025-12-15T05:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.177233 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.177285 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.177294 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.177308 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.177337 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:03Z","lastTransitionTime":"2025-12-15T05:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.279417 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.279475 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.279486 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.279499 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.279511 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:03Z","lastTransitionTime":"2025-12-15T05:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.381448 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.381505 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.381515 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.381531 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.381541 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:03Z","lastTransitionTime":"2025-12-15T05:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.483286 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.483323 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.483332 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.483364 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.483373 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:03Z","lastTransitionTime":"2025-12-15T05:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.585182 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.585229 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.585241 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.585258 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.585271 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:03Z","lastTransitionTime":"2025-12-15T05:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.622182 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs\") pod \"network-metrics-daemon-4nn8g\" (UID: \"fca0b2d2-cd19-409a-aa6d-df8b295adf62\") " pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:03 crc kubenswrapper[4747]: E1215 05:38:03.622309 4747 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 15 05:38:03 crc kubenswrapper[4747]: E1215 05:38:03.622367 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs podName:fca0b2d2-cd19-409a-aa6d-df8b295adf62 nodeName:}" failed. No retries permitted until 2025-12-15 05:38:19.622351276 +0000 UTC m=+63.318863193 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs") pod "network-metrics-daemon-4nn8g" (UID: "fca0b2d2-cd19-409a-aa6d-df8b295adf62") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.628465 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:03 crc kubenswrapper[4747]: E1215 05:38:03.628572 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.687242 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.687297 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.687306 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.687326 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.687339 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:03Z","lastTransitionTime":"2025-12-15T05:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.790222 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.790263 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.790275 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.790287 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.790297 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:03Z","lastTransitionTime":"2025-12-15T05:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.891964 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.892004 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.892015 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.892031 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.892042 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:03Z","lastTransitionTime":"2025-12-15T05:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.993824 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.993884 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.993894 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.993911 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:03 crc kubenswrapper[4747]: I1215 05:38:03.993941 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:03Z","lastTransitionTime":"2025-12-15T05:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.096152 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.096195 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.096207 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.096225 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.096237 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:04Z","lastTransitionTime":"2025-12-15T05:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.198400 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.198446 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.198456 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.198469 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.198477 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:04Z","lastTransitionTime":"2025-12-15T05:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.200590 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.201228 4747 scope.go:117] "RemoveContainer" containerID="72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93" Dec 15 05:38:04 crc kubenswrapper[4747]: E1215 05:38:04.201365 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.211538 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489
a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:04Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.224685 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1215 05:37:57.248195 6397 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1215 05:37:57.248522 6397 reflector.go:311] 
Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1215 05:37:57.254515 6397 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1215 05:37:57.254583 6397 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1215 05:37:57.254642 6397 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:57.254715 6397 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:57.254743 6397 factory.go:656] Stopping watch factory\\\\nI1215 05:37:57.286414 6397 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1215 05:37:57.286444 6397 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1215 05:37:57.286715 6397 ovnkube.go:599] Stopped ovnkube\\\\nI1215 05:37:57.286741 6397 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1215 05:37:57.286813 6397 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd
04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:04Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.233190 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:04Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.241964 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:04Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.251853 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f
4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:04Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.261259 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:04Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.268441 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4nn8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca0b2d2-cd19-409a-aa6d-df8b295adf62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4nn8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:04Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:04 crc 
kubenswrapper[4747]: I1215 05:38:04.277385 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c3
3480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 
05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:04Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.287706 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:04Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.296658 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:04Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.300436 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.300469 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.300481 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.300498 4747 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.300509 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:04Z","lastTransitionTime":"2025-12-15T05:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.304192 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:04Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.311259 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:04Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.320560 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:04Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.328691 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:04Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.337452 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9b71b51-500c-4932-b19a-559ec3d15a5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4bc1257d5ff3f04dcdce005fadc221c444afadbe862174495f6191537a58970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e44712601c304d309e50b76c61dba25a4fd
6d982f6dd1df36fb046b0473bb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82d2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:04Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.345572 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:38:04Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.402152 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.402178 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.402190 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.402202 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.402212 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:04Z","lastTransitionTime":"2025-12-15T05:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.504405 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.504438 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.504447 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.504459 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.504468 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:04Z","lastTransitionTime":"2025-12-15T05:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.605993 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.606026 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.606035 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.606048 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.606059 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:04Z","lastTransitionTime":"2025-12-15T05:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.629696 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:04 crc kubenswrapper[4747]: E1215 05:38:04.629881 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.629951 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.630048 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:04 crc kubenswrapper[4747]: E1215 05:38:04.630291 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:04 crc kubenswrapper[4747]: E1215 05:38:04.630454 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.707361 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.707421 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.707432 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.707469 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.707480 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:04Z","lastTransitionTime":"2025-12-15T05:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.809883 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.809943 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.809961 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.809975 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.809985 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:04Z","lastTransitionTime":"2025-12-15T05:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.912192 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.912282 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.912326 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.912342 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:04 crc kubenswrapper[4747]: I1215 05:38:04.912353 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:04Z","lastTransitionTime":"2025-12-15T05:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.014243 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.014268 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.014277 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.014288 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.014296 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:05Z","lastTransitionTime":"2025-12-15T05:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.115986 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.116037 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.116051 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.116068 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.116079 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:05Z","lastTransitionTime":"2025-12-15T05:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.217598 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.217661 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.217670 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.217683 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.217693 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:05Z","lastTransitionTime":"2025-12-15T05:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.319364 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.319419 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.319432 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.319455 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.319468 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:05Z","lastTransitionTime":"2025-12-15T05:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.421042 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.421089 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.421102 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.421117 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.421129 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:05Z","lastTransitionTime":"2025-12-15T05:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.523333 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.523392 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.523404 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.523438 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.523450 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:05Z","lastTransitionTime":"2025-12-15T05:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.625909 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.626055 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.626162 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.626254 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.626455 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:05Z","lastTransitionTime":"2025-12-15T05:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.628263 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:05 crc kubenswrapper[4747]: E1215 05:38:05.628401 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.728470 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.728526 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.728538 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.728560 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.728573 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:05Z","lastTransitionTime":"2025-12-15T05:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.831273 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.831320 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.831331 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.831353 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.831365 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:05Z","lastTransitionTime":"2025-12-15T05:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.933880 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.933917 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.933947 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.933962 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:05 crc kubenswrapper[4747]: I1215 05:38:05.933970 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:05Z","lastTransitionTime":"2025-12-15T05:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.035460 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.035543 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.035576 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.035593 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.035606 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:06Z","lastTransitionTime":"2025-12-15T05:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.136971 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.137032 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.137042 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.137056 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.137065 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:06Z","lastTransitionTime":"2025-12-15T05:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.239444 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.239497 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.239508 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.239533 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.239548 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:06Z","lastTransitionTime":"2025-12-15T05:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.264538 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.275940 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.276562 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.286341 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.294229 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9b71b51-500c-4932-b19a-559ec3d15a5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4bc1257d5ff3f04dcdce005fadc221c444afadbe862174495f6191537a58970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e44712601c304d309e50b76c61dba25a4fd
6d982f6dd1df36fb046b0473bb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82d2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.302075 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.310340 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.318880 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.327645 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f
4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.334918 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: 
I1215 05:38:06.342184 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.342212 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.342220 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.342233 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.342246 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:06Z","lastTransitionTime":"2025-12-15T05:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.346640 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1215 05:37:57.248195 6397 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1215 05:37:57.248522 6397 reflector.go:311] 
Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1215 05:37:57.254515 6397 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1215 05:37:57.254583 6397 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1215 05:37:57.254642 6397 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:57.254715 6397 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:57.254743 6397 factory.go:656] Stopping watch factory\\\\nI1215 05:37:57.286414 6397 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1215 05:37:57.286444 6397 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1215 05:37:57.286715 6397 ovnkube.go:599] Stopped ovnkube\\\\nI1215 05:37:57.286741 6397 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1215 05:37:57.286813 6397 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd
04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.357620 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc74
8e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.367175 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.376055 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.383884 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.391565 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.401574 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.409820 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4nn8g" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca0b2d2-cd19-409a-aa6d-df8b295adf62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4nn8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc 
kubenswrapper[4747]: I1215 05:38:06.444879 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.444918 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.444946 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.444964 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.444977 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:06Z","lastTransitionTime":"2025-12-15T05:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.447444 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:38:06 crc kubenswrapper[4747]: E1215 05:38:06.447679 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:38:38.447659883 +0000 UTC m=+82.144171799 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.547145 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.547192 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.547202 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.547219 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.547230 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:06Z","lastTransitionTime":"2025-12-15T05:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.548505 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.548549 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.548581 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.548623 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:06 crc kubenswrapper[4747]: E1215 05:38:06.548654 4747 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Dec 15 05:38:06 crc kubenswrapper[4747]: E1215 05:38:06.548677 4747 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 15 05:38:06 crc kubenswrapper[4747]: E1215 05:38:06.548708 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-15 05:38:38.548691174 +0000 UTC m=+82.245203090 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 15 05:38:06 crc kubenswrapper[4747]: E1215 05:38:06.548726 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 15 05:38:06 crc kubenswrapper[4747]: E1215 05:38:06.548740 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-15 05:38:38.548723926 +0000 UTC m=+82.245235843 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 15 05:38:06 crc kubenswrapper[4747]: E1215 05:38:06.548751 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 15 05:38:06 crc kubenswrapper[4747]: E1215 05:38:06.548762 4747 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:38:06 crc kubenswrapper[4747]: E1215 05:38:06.548781 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 15 05:38:06 crc kubenswrapper[4747]: E1215 05:38:06.548806 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-15 05:38:38.548788978 +0000 UTC m=+82.245300895 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:38:06 crc kubenswrapper[4747]: E1215 05:38:06.548807 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 15 05:38:06 crc kubenswrapper[4747]: E1215 05:38:06.548826 4747 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:38:06 crc kubenswrapper[4747]: E1215 05:38:06.548850 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-15 05:38:38.548844673 +0000 UTC m=+82.245356590 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.628206 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.628265 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:06 crc kubenswrapper[4747]: E1215 05:38:06.628323 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.628375 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:06 crc kubenswrapper[4747]: E1215 05:38:06.628508 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:06 crc kubenswrapper[4747]: E1215 05:38:06.628559 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.638678 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.647499 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.648495 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.648523 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.648534 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.648547 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.648561 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:06Z","lastTransitionTime":"2025-12-15T05:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.656715 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9b71b51-500c-4932-b19a-559ec3d15a5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4bc1257d5ff3f04dcdce005fadc221c444afadbe862174495f6191537a58970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e44712601c304d309e50b76c61dba25a4fd
6d982f6dd1df36fb046b0473bb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82d2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.665052 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.673642 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ba935d-4d45-497f-a710-482288987eb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d682a9462fba61e03c438d541888778564c5f9614b20ae3415d06039a1b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://918387852c8b6a10cbef90523b68f21472cb57394fe3107fb6a96ac8e76ada07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff0666091801d67feef4ab5998d6a9c037afa1781db60c2f67046f3ec99a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.683582 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.692439 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.710383 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f
4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.718707 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: 
I1215 05:38:06.731133 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1215 05:37:57.248195 6397 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1215 05:37:57.248522 6397 reflector.go:311] 
Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1215 05:37:57.254515 6397 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1215 05:37:57.254583 6397 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1215 05:37:57.254642 6397 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:57.254715 6397 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:57.254743 6397 factory.go:656] Stopping watch factory\\\\nI1215 05:37:57.286414 6397 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1215 05:37:57.286444 6397 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1215 05:37:57.286715 6397 ovnkube.go:599] Stopped ovnkube\\\\nI1215 05:37:57.286741 6397 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1215 05:37:57.286813 6397 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd
04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.753454 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.753475 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.753483 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.753495 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.753505 4747 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:06Z","lastTransitionTime":"2025-12-15T05:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.759092 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7
c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.781363 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.795487 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.802895 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.810529 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.819400 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.827677 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4nn8g" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca0b2d2-cd19-409a-aa6d-df8b295adf62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4nn8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:06Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:06 crc 
kubenswrapper[4747]: I1215 05:38:06.855580 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.855610 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.855621 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.855636 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.855646 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:06Z","lastTransitionTime":"2025-12-15T05:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.962303 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.962424 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.962504 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.962572 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:06 crc kubenswrapper[4747]: I1215 05:38:06.962665 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:06Z","lastTransitionTime":"2025-12-15T05:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.064679 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.064835 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.064900 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.064987 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.065043 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:07Z","lastTransitionTime":"2025-12-15T05:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.167346 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.167447 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.167515 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.167578 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.167627 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:07Z","lastTransitionTime":"2025-12-15T05:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.269511 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.269558 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.269569 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.269591 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.269603 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:07Z","lastTransitionTime":"2025-12-15T05:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.371663 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.371717 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.371730 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.371751 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.371766 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:07Z","lastTransitionTime":"2025-12-15T05:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.473743 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.473783 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.473793 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.473817 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.473828 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:07Z","lastTransitionTime":"2025-12-15T05:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.575585 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.575622 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.575633 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.575649 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.575663 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:07Z","lastTransitionTime":"2025-12-15T05:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.629152 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:07 crc kubenswrapper[4747]: E1215 05:38:07.629292 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.677404 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.677435 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.677446 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.677467 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.677475 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:07Z","lastTransitionTime":"2025-12-15T05:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.779737 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.779766 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.779778 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.779795 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.779821 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:07Z","lastTransitionTime":"2025-12-15T05:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.815040 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.815075 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.815084 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.815096 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.815105 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:07Z","lastTransitionTime":"2025-12-15T05:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:07 crc kubenswrapper[4747]: E1215 05:38:07.826599 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:07Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.829879 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.829902 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.829912 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.829943 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.829953 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:07Z","lastTransitionTime":"2025-12-15T05:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:07 crc kubenswrapper[4747]: E1215 05:38:07.839737 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:07Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.842709 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.842739 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.842749 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.842765 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.842774 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:07Z","lastTransitionTime":"2025-12-15T05:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:07 crc kubenswrapper[4747]: E1215 05:38:07.853740 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:07Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.856833 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.856981 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.857053 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.857126 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.857190 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:07Z","lastTransitionTime":"2025-12-15T05:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:07 crc kubenswrapper[4747]: E1215 05:38:07.866422 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:07Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.869117 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.869217 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.869273 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.869325 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.869394 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:07Z","lastTransitionTime":"2025-12-15T05:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:07 crc kubenswrapper[4747]: E1215 05:38:07.878770 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:07Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:07 crc kubenswrapper[4747]: E1215 05:38:07.878885 4747 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.881744 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.881767 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.881777 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.881790 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.881808 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:07Z","lastTransitionTime":"2025-12-15T05:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.983549 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.983575 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.983584 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.983595 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:07 crc kubenswrapper[4747]: I1215 05:38:07.983608 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:07Z","lastTransitionTime":"2025-12-15T05:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.085758 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.085819 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.085833 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.085858 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.085871 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:08Z","lastTransitionTime":"2025-12-15T05:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.188246 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.188287 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.188301 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.188317 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.188330 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:08Z","lastTransitionTime":"2025-12-15T05:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.291089 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.291184 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.291198 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.291216 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.291225 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:08Z","lastTransitionTime":"2025-12-15T05:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.394306 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.394350 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.394362 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.394379 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.394392 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:08Z","lastTransitionTime":"2025-12-15T05:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.496620 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.496646 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.496655 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.496665 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.496675 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:08Z","lastTransitionTime":"2025-12-15T05:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.599374 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.599400 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.599411 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.599422 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.599432 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:08Z","lastTransitionTime":"2025-12-15T05:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.629133 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.629178 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.629138 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:08 crc kubenswrapper[4747]: E1215 05:38:08.629270 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:08 crc kubenswrapper[4747]: E1215 05:38:08.629423 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:08 crc kubenswrapper[4747]: E1215 05:38:08.629544 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.701258 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.701312 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.701322 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.701337 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.701348 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:08Z","lastTransitionTime":"2025-12-15T05:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.803292 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.803447 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.803521 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.803608 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.803684 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:08Z","lastTransitionTime":"2025-12-15T05:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.906306 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.906348 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.906361 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.906380 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:08 crc kubenswrapper[4747]: I1215 05:38:08.906391 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:08Z","lastTransitionTime":"2025-12-15T05:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.008953 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.008994 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.009006 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.009019 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.009027 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:09Z","lastTransitionTime":"2025-12-15T05:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.110994 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.111039 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.111048 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.111067 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.111078 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:09Z","lastTransitionTime":"2025-12-15T05:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.213347 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.213472 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.213535 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.213601 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.213652 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:09Z","lastTransitionTime":"2025-12-15T05:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.316056 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.316096 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.316107 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.316123 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.316133 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:09Z","lastTransitionTime":"2025-12-15T05:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.418493 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.418531 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.418543 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.418561 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.418572 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:09Z","lastTransitionTime":"2025-12-15T05:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.520698 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.520749 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.520762 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.520779 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.520795 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:09Z","lastTransitionTime":"2025-12-15T05:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.623317 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.623347 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.623357 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.623369 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.623380 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:09Z","lastTransitionTime":"2025-12-15T05:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.628776 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:09 crc kubenswrapper[4747]: E1215 05:38:09.628908 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.725765 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.725813 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.725827 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.725840 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.725849 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:09Z","lastTransitionTime":"2025-12-15T05:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.828546 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.828596 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.828610 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.828628 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.828640 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:09Z","lastTransitionTime":"2025-12-15T05:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.930697 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.930829 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.930915 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.931019 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:09 crc kubenswrapper[4747]: I1215 05:38:09.931082 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:09Z","lastTransitionTime":"2025-12-15T05:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.033031 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.033095 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.033114 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.033134 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.033153 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:10Z","lastTransitionTime":"2025-12-15T05:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.135571 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.135618 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.135633 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.135650 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.135664 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:10Z","lastTransitionTime":"2025-12-15T05:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.237661 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.237782 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.237859 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.237914 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.237991 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:10Z","lastTransitionTime":"2025-12-15T05:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.340425 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.340465 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.340475 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.340492 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.340507 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:10Z","lastTransitionTime":"2025-12-15T05:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.442645 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.442995 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.443075 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.443151 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.443218 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:10Z","lastTransitionTime":"2025-12-15T05:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.545270 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.545310 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.545324 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.545339 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.545350 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:10Z","lastTransitionTime":"2025-12-15T05:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.629240 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.629295 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.629387 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:10 crc kubenswrapper[4747]: E1215 05:38:10.629559 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:10 crc kubenswrapper[4747]: E1215 05:38:10.629741 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:10 crc kubenswrapper[4747]: E1215 05:38:10.629815 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.646840 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.646876 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.646886 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.646898 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.646909 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:10Z","lastTransitionTime":"2025-12-15T05:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.748981 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.749018 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.749031 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.749045 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.749059 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:10Z","lastTransitionTime":"2025-12-15T05:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.850787 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.850822 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.850833 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.850845 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.850854 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:10Z","lastTransitionTime":"2025-12-15T05:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.952862 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.952900 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.952912 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.952954 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:10 crc kubenswrapper[4747]: I1215 05:38:10.952967 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:10Z","lastTransitionTime":"2025-12-15T05:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.055399 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.055442 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.055458 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.055472 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.055481 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:11Z","lastTransitionTime":"2025-12-15T05:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.157230 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.157270 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.157280 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.157293 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.157306 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:11Z","lastTransitionTime":"2025-12-15T05:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.259459 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.259513 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.259525 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.259541 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.259550 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:11Z","lastTransitionTime":"2025-12-15T05:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.362678 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.362728 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.362743 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.362761 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.362773 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:11Z","lastTransitionTime":"2025-12-15T05:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.465336 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.465374 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.465386 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.465400 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.465411 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:11Z","lastTransitionTime":"2025-12-15T05:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.567188 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.567236 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.567249 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.567264 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.567277 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:11Z","lastTransitionTime":"2025-12-15T05:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.628767 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:11 crc kubenswrapper[4747]: E1215 05:38:11.628911 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.668760 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.668787 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.668807 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.668820 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.668830 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:11Z","lastTransitionTime":"2025-12-15T05:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.770667 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.770720 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.770737 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.770756 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.770770 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:11Z","lastTransitionTime":"2025-12-15T05:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.872600 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.872631 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.872640 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.872650 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.872660 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:11Z","lastTransitionTime":"2025-12-15T05:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.974692 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.974733 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.974747 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.974762 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:11 crc kubenswrapper[4747]: I1215 05:38:11.974774 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:11Z","lastTransitionTime":"2025-12-15T05:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.077526 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.077582 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.077596 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.077610 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.077620 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:12Z","lastTransitionTime":"2025-12-15T05:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.179163 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.179200 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.179210 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.179225 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.179234 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:12Z","lastTransitionTime":"2025-12-15T05:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.281317 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.281352 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.281369 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.281384 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.281393 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:12Z","lastTransitionTime":"2025-12-15T05:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.383460 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.383496 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.383506 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.383518 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.383527 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:12Z","lastTransitionTime":"2025-12-15T05:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.485217 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.485257 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.485269 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.485282 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.485294 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:12Z","lastTransitionTime":"2025-12-15T05:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.587628 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.587683 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.587694 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.587714 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.587730 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:12Z","lastTransitionTime":"2025-12-15T05:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.628504 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.628592 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.628504 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:12 crc kubenswrapper[4747]: E1215 05:38:12.628673 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:12 crc kubenswrapper[4747]: E1215 05:38:12.628820 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:12 crc kubenswrapper[4747]: E1215 05:38:12.628910 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.689383 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.689418 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.689430 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.689443 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.689454 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:12Z","lastTransitionTime":"2025-12-15T05:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.791375 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.791435 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.791447 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.791462 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.791490 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:12Z","lastTransitionTime":"2025-12-15T05:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.893827 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.893867 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.893877 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.893898 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.893911 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:12Z","lastTransitionTime":"2025-12-15T05:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.996199 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.996257 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.996266 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.996279 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:12 crc kubenswrapper[4747]: I1215 05:38:12.996291 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:12Z","lastTransitionTime":"2025-12-15T05:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.097879 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.097913 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.097939 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.097953 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.097962 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:13Z","lastTransitionTime":"2025-12-15T05:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.199870 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.199960 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.199979 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.199998 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.200012 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:13Z","lastTransitionTime":"2025-12-15T05:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.302435 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.302565 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.302632 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.302703 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.302758 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:13Z","lastTransitionTime":"2025-12-15T05:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.405066 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.405120 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.405131 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.405159 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.405170 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:13Z","lastTransitionTime":"2025-12-15T05:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.507458 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.507525 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.507534 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.507549 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.507561 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:13Z","lastTransitionTime":"2025-12-15T05:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.609367 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.609400 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.609411 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.609425 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.609433 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:13Z","lastTransitionTime":"2025-12-15T05:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.628380 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:13 crc kubenswrapper[4747]: E1215 05:38:13.628540 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.711219 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.711351 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.711422 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.711509 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.711571 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:13Z","lastTransitionTime":"2025-12-15T05:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.813751 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.813780 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.813788 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.813807 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.813816 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:13Z","lastTransitionTime":"2025-12-15T05:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.916367 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.916409 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.916423 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.916439 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:13 crc kubenswrapper[4747]: I1215 05:38:13.916450 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:13Z","lastTransitionTime":"2025-12-15T05:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.018500 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.018550 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.018561 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.018581 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.018592 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:14Z","lastTransitionTime":"2025-12-15T05:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.121111 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.121147 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.121157 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.121171 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.121184 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:14Z","lastTransitionTime":"2025-12-15T05:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.223105 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.223145 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.223158 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.223176 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.223189 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:14Z","lastTransitionTime":"2025-12-15T05:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.325255 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.325287 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.325296 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.325310 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.325321 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:14Z","lastTransitionTime":"2025-12-15T05:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.427140 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.427195 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.427206 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.427231 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.427243 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:14Z","lastTransitionTime":"2025-12-15T05:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.528619 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.528661 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.528677 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.528693 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.528703 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:14Z","lastTransitionTime":"2025-12-15T05:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.628406 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.628452 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.628458 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:14 crc kubenswrapper[4747]: E1215 05:38:14.628687 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:14 crc kubenswrapper[4747]: E1215 05:38:14.628808 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:14 crc kubenswrapper[4747]: E1215 05:38:14.628946 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.629562 4747 scope.go:117] "RemoveContainer" containerID="72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93" Dec 15 05:38:14 crc kubenswrapper[4747]: E1215 05:38:14.629709 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.630258 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.630289 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.630299 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.630314 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.630325 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:14Z","lastTransitionTime":"2025-12-15T05:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.731809 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.731850 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.731862 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.731875 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.731886 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:14Z","lastTransitionTime":"2025-12-15T05:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.834073 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.834106 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.834116 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.834128 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.834138 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:14Z","lastTransitionTime":"2025-12-15T05:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.936084 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.936127 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.936138 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.936157 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:14 crc kubenswrapper[4747]: I1215 05:38:14.936171 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:14Z","lastTransitionTime":"2025-12-15T05:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.037832 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.037856 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.037866 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.037878 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.037886 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:15Z","lastTransitionTime":"2025-12-15T05:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.139692 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.139724 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.139735 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.139745 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.139754 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:15Z","lastTransitionTime":"2025-12-15T05:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.241983 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.242012 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.242021 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.242034 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.242043 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:15Z","lastTransitionTime":"2025-12-15T05:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.344159 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.344220 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.344233 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.344251 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.344264 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:15Z","lastTransitionTime":"2025-12-15T05:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.446313 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.446355 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.446365 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.446384 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.446396 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:15Z","lastTransitionTime":"2025-12-15T05:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.548712 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.548757 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.548770 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.548788 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.548810 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:15Z","lastTransitionTime":"2025-12-15T05:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.628594 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:15 crc kubenswrapper[4747]: E1215 05:38:15.628715 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.650821 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.650861 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.650871 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.650886 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.650898 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:15Z","lastTransitionTime":"2025-12-15T05:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.753192 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.753234 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.753247 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.753263 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.753277 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:15Z","lastTransitionTime":"2025-12-15T05:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.855651 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.855696 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.855708 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.855724 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.855738 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:15Z","lastTransitionTime":"2025-12-15T05:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.958178 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.958210 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.958221 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.958233 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:15 crc kubenswrapper[4747]: I1215 05:38:15.958244 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:15Z","lastTransitionTime":"2025-12-15T05:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.060332 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.060367 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.060473 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.060735 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.060757 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:16Z","lastTransitionTime":"2025-12-15T05:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.162451 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.162496 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.162508 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.162524 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.162536 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:16Z","lastTransitionTime":"2025-12-15T05:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.264641 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.264695 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.264709 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.264731 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.264742 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:16Z","lastTransitionTime":"2025-12-15T05:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.366186 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.366221 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.366230 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.366246 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.366256 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:16Z","lastTransitionTime":"2025-12-15T05:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.468621 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.468658 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.468670 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.468682 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.468692 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:16Z","lastTransitionTime":"2025-12-15T05:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.571005 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.571048 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.571059 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.571073 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.571083 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:16Z","lastTransitionTime":"2025-12-15T05:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.628508 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:16 crc kubenswrapper[4747]: E1215 05:38:16.628636 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.628663 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:16 crc kubenswrapper[4747]: E1215 05:38:16.628764 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.628849 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:16 crc kubenswrapper[4747]: E1215 05:38:16.629031 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.644069 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:16Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.654201 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ba935d-4d45-497f-a710-482288987eb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d682a9462fba61e03c438d541888778564c5f9614b20ae3415d06039a1b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://918387852c8b6a10cbef90523b68f21472cb57394fe3107fb6a96ac8e76ada07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff0666091801d67feef4ab5998d6a9c037afa1781db60c2f67046f3ec99a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:16Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.665224 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:16Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.673505 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.673540 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.673551 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.673564 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.673575 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:16Z","lastTransitionTime":"2025-12-15T05:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.674660 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:16Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.685219 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f
4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:16Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.694289 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:16Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:16 crc kubenswrapper[4747]: 
I1215 05:38:16.708271 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1215 05:37:57.248195 6397 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1215 05:37:57.248522 6397 reflector.go:311] 
Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1215 05:37:57.254515 6397 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1215 05:37:57.254583 6397 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1215 05:37:57.254642 6397 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:57.254715 6397 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:57.254743 6397 factory.go:656] Stopping watch factory\\\\nI1215 05:37:57.286414 6397 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1215 05:37:57.286444 6397 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1215 05:37:57.286715 6397 ovnkube.go:599] Stopped ovnkube\\\\nI1215 05:37:57.286741 6397 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1215 05:37:57.286813 6397 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd
04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:16Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.722247 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc74
8e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:16Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.732668 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:16Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.742857 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:16Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.751059 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:16Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.759320 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:16Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.770187 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:16Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.775327 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:16 crc 
kubenswrapper[4747]: I1215 05:38:16.775364 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.775377 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.775393 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.775407 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:16Z","lastTransitionTime":"2025-12-15T05:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.778048 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4nn8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca0b2d2-cd19-409a-aa6d-df8b295adf62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4nn8g\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:16Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.787890 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:16Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.798435 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:16Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.807983 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9b71b51-500c-4932-b19a-559ec3d15a5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4bc1257d5ff3f04dcdce005fadc221c444afadbe862174495f6191537a58970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e44712601c304d309e50b76c61dba25a4fd
6d982f6dd1df36fb046b0473bb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82d2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:16Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.877159 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.877204 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.877225 4747 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.877249 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.877267 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:16Z","lastTransitionTime":"2025-12-15T05:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.979207 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.979243 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.979254 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.979275 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:16 crc kubenswrapper[4747]: I1215 05:38:16.979286 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:16Z","lastTransitionTime":"2025-12-15T05:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.081867 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.081912 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.081941 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.081963 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.081981 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:17Z","lastTransitionTime":"2025-12-15T05:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.183921 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.183990 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.184002 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.184020 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.184033 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:17Z","lastTransitionTime":"2025-12-15T05:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.286574 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.286609 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.286621 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.286639 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.286649 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:17Z","lastTransitionTime":"2025-12-15T05:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.388686 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.388720 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.388732 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.388744 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.388755 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:17Z","lastTransitionTime":"2025-12-15T05:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.491884 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.491915 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.491942 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.491960 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.491971 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:17Z","lastTransitionTime":"2025-12-15T05:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.594184 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.594232 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.594245 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.594267 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.594283 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:17Z","lastTransitionTime":"2025-12-15T05:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.628762 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:17 crc kubenswrapper[4747]: E1215 05:38:17.628903 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.696376 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.696441 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.696452 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.696472 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.696486 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:17Z","lastTransitionTime":"2025-12-15T05:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.799398 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.799446 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.799459 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.799478 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.799489 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:17Z","lastTransitionTime":"2025-12-15T05:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.902573 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.902634 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.902645 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.902665 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.902677 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:17Z","lastTransitionTime":"2025-12-15T05:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.997992 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.998045 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.998055 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.998072 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:17 crc kubenswrapper[4747]: I1215 05:38:17.998084 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:17Z","lastTransitionTime":"2025-12-15T05:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:18 crc kubenswrapper[4747]: E1215 05:38:18.009547 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:18Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.012892 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.012942 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.012954 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.012965 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.012975 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:18Z","lastTransitionTime":"2025-12-15T05:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:18 crc kubenswrapper[4747]: E1215 05:38:18.022116 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:18Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.025588 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.025622 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.025632 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.025670 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.025694 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:18Z","lastTransitionTime":"2025-12-15T05:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:18 crc kubenswrapper[4747]: E1215 05:38:18.035776 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:18Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.038740 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.038795 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.038814 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.038826 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.038836 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:18Z","lastTransitionTime":"2025-12-15T05:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:18 crc kubenswrapper[4747]: E1215 05:38:18.047366 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:18Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.050717 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.050752 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.050762 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.050778 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.050790 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:18Z","lastTransitionTime":"2025-12-15T05:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:18 crc kubenswrapper[4747]: E1215 05:38:18.060509 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:18Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:18 crc kubenswrapper[4747]: E1215 05:38:18.060628 4747 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.061887 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.061910 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.061944 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.061957 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.061965 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:18Z","lastTransitionTime":"2025-12-15T05:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.163875 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.163919 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.163945 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.163964 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.163977 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:18Z","lastTransitionTime":"2025-12-15T05:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.265956 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.265999 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.266013 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.266032 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.266045 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:18Z","lastTransitionTime":"2025-12-15T05:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.368009 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.368048 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.368059 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.368074 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.368087 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:18Z","lastTransitionTime":"2025-12-15T05:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.469776 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.469821 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.469831 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.469844 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.469853 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:18Z","lastTransitionTime":"2025-12-15T05:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.571736 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.571769 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.571778 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.571788 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.571808 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:18Z","lastTransitionTime":"2025-12-15T05:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.628622 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:18 crc kubenswrapper[4747]: E1215 05:38:18.629134 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.628810 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:18 crc kubenswrapper[4747]: E1215 05:38:18.629560 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.628697 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:18 crc kubenswrapper[4747]: E1215 05:38:18.629872 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.673886 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.673952 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.673964 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.673979 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.673996 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:18Z","lastTransitionTime":"2025-12-15T05:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.776188 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.776213 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.776222 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.776252 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.776263 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:18Z","lastTransitionTime":"2025-12-15T05:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.878301 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.878351 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.878368 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.878380 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.878387 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:18Z","lastTransitionTime":"2025-12-15T05:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.980819 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.980848 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.980857 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.980870 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:18 crc kubenswrapper[4747]: I1215 05:38:18.980880 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:18Z","lastTransitionTime":"2025-12-15T05:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.082566 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.082607 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.082620 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.082638 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.082648 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:19Z","lastTransitionTime":"2025-12-15T05:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.184141 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.184177 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.184185 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.184197 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.184206 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:19Z","lastTransitionTime":"2025-12-15T05:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.286086 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.286135 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.286146 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.286167 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.286180 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:19Z","lastTransitionTime":"2025-12-15T05:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.389838 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.389873 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.389883 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.389897 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.389906 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:19Z","lastTransitionTime":"2025-12-15T05:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.491911 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.491970 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.491982 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.491995 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.492006 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:19Z","lastTransitionTime":"2025-12-15T05:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.593857 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.593907 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.593919 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.593969 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.593982 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:19Z","lastTransitionTime":"2025-12-15T05:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.628418 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:19 crc kubenswrapper[4747]: E1215 05:38:19.628545 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.672197 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs\") pod \"network-metrics-daemon-4nn8g\" (UID: \"fca0b2d2-cd19-409a-aa6d-df8b295adf62\") " pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:19 crc kubenswrapper[4747]: E1215 05:38:19.672381 4747 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 15 05:38:19 crc kubenswrapper[4747]: E1215 05:38:19.672448 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs podName:fca0b2d2-cd19-409a-aa6d-df8b295adf62 nodeName:}" failed. No retries permitted until 2025-12-15 05:38:51.67242954 +0000 UTC m=+95.368941458 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs") pod "network-metrics-daemon-4nn8g" (UID: "fca0b2d2-cd19-409a-aa6d-df8b295adf62") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.695998 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.696064 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.696078 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.696100 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.696115 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:19Z","lastTransitionTime":"2025-12-15T05:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.798645 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.798694 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.798704 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.798723 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.798736 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:19Z","lastTransitionTime":"2025-12-15T05:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.901179 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.901226 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.901236 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.901252 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:19 crc kubenswrapper[4747]: I1215 05:38:19.901263 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:19Z","lastTransitionTime":"2025-12-15T05:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.003739 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.003770 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.003780 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.003796 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.003815 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:20Z","lastTransitionTime":"2025-12-15T05:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.105756 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.105812 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.105825 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.105845 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.105857 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:20Z","lastTransitionTime":"2025-12-15T05:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.208188 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.208233 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.208243 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.208259 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.208271 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:20Z","lastTransitionTime":"2025-12-15T05:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.310301 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.310345 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.310355 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.310376 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.310391 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:20Z","lastTransitionTime":"2025-12-15T05:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.411999 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.412051 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.412064 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.412081 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.412092 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:20Z","lastTransitionTime":"2025-12-15T05:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.514355 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.514402 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.514413 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.514430 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.514440 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:20Z","lastTransitionTime":"2025-12-15T05:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.616395 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.616444 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.616457 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.616478 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.616495 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:20Z","lastTransitionTime":"2025-12-15T05:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.629109 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.629136 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.629143 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:20 crc kubenswrapper[4747]: E1215 05:38:20.629280 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:20 crc kubenswrapper[4747]: E1215 05:38:20.629619 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:20 crc kubenswrapper[4747]: E1215 05:38:20.629543 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.718185 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.718215 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.718224 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.718255 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.718269 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:20Z","lastTransitionTime":"2025-12-15T05:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.820468 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.820571 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.820663 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.820728 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.820817 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:20Z","lastTransitionTime":"2025-12-15T05:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.923298 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.923409 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.923461 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.923511 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:20 crc kubenswrapper[4747]: I1215 05:38:20.923558 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:20Z","lastTransitionTime":"2025-12-15T05:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.026336 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.026671 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.026683 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.026695 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.026703 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:21Z","lastTransitionTime":"2025-12-15T05:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.128821 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.128863 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.128875 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.128891 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.128903 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:21Z","lastTransitionTime":"2025-12-15T05:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.231212 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.231248 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.231259 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.231274 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.231285 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:21Z","lastTransitionTime":"2025-12-15T05:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.333198 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.333439 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.333450 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.333463 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.333470 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:21Z","lastTransitionTime":"2025-12-15T05:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.435043 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.435088 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.435099 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.435114 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.435125 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:21Z","lastTransitionTime":"2025-12-15T05:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.536732 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.536775 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.536809 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.536823 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.536834 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:21Z","lastTransitionTime":"2025-12-15T05:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.628898 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:21 crc kubenswrapper[4747]: E1215 05:38:21.629046 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.638606 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.638639 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.638651 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.638668 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.638680 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:21Z","lastTransitionTime":"2025-12-15T05:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.741232 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.741274 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.741284 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.741302 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.741313 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:21Z","lastTransitionTime":"2025-12-15T05:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.843431 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.843488 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.843499 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.843522 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.843537 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:21Z","lastTransitionTime":"2025-12-15T05:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.945607 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.945645 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.945657 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.945672 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.945685 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:21Z","lastTransitionTime":"2025-12-15T05:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.948978 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmfps_89350c5d-9a77-499e-81ec-376b012cc219/kube-multus/0.log" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.949037 4747 generic.go:334] "Generic (PLEG): container finished" podID="89350c5d-9a77-499e-81ec-376b012cc219" containerID="31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d" exitCode=1 Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.949073 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmfps" event={"ID":"89350c5d-9a77-499e-81ec-376b012cc219","Type":"ContainerDied","Data":"31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d"} Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.949454 4747 scope.go:117] "RemoveContainer" containerID="31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.967871 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f
4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:21Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:21 crc kubenswrapper[4747]: I1215 05:38:21.980401 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:21Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:21 crc kubenswrapper[4747]: 
I1215 05:38:21.995327 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1215 05:37:57.248195 6397 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1215 05:37:57.248522 6397 reflector.go:311] 
Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1215 05:37:57.254515 6397 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1215 05:37:57.254583 6397 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1215 05:37:57.254642 6397 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:57.254715 6397 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:57.254743 6397 factory.go:656] Stopping watch factory\\\\nI1215 05:37:57.286414 6397 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1215 05:37:57.286444 6397 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1215 05:37:57.286715 6397 ovnkube.go:599] Stopped ovnkube\\\\nI1215 05:37:57.286741 6397 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1215 05:37:57.286813 6397 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd
04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:21Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.005175 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ba935d-4d45-497f-a710-482288987eb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d682a9462fba61e03c438d541888778564c5f9614b20ae3415d06039a1b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://918387852c8b6a10cbef90523b68f21472cb57394fe3107fb6a96ac8e76ada07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff0666091801d67feef4ab5998d6a9c037afa1781db60c2f67046f3ec99a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:22Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.014094 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:22Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.022527 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:22Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.029697 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:22Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.039215 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:38:20Z\\\",\\\"message\\\":\\\"2025-12-15T05:37:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a6c939b8-0d27-4921-8e30-40e8be11eed4\\\\n2025-12-15T05:37:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a6c939b8-0d27-4921-8e30-40e8be11eed4 to /host/opt/cni/bin/\\\\n2025-12-15T05:37:35Z [verbose] multus-daemon started\\\\n2025-12-15T05:37:35Z [verbose] Readiness Indicator file check\\\\n2025-12-15T05:38:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:22Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.047329 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4nn8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca0b2d2-cd19-409a-aa6d-df8b295adf62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4nn8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:22Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:22 crc 
kubenswrapper[4747]: I1215 05:38:22.047562 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.047584 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.047596 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.047616 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.047632 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:22Z","lastTransitionTime":"2025-12-15T05:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.057768 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:22Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.067377 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:22Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.076168 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:22Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.084283 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:22Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.092856 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:22Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.101573 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:22Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.109654 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9b71b51-500c-4932-b19a-559ec3d15a5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4bc1257d5ff3f04dcdce005fadc221c444afadbe862174495f6191537a58970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e44712601c304d309e50b76c61dba25a4fd
6d982f6dd1df36fb046b0473bb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82d2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:22Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.118122 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:38:22Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.150252 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.150279 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.150289 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.150306 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.150318 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:22Z","lastTransitionTime":"2025-12-15T05:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.252065 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.252104 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.252115 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.252134 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.252150 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:22Z","lastTransitionTime":"2025-12-15T05:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.353736 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.353888 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.353974 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.354038 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.354093 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:22Z","lastTransitionTime":"2025-12-15T05:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.456017 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.456056 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.456067 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.456082 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.456092 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:22Z","lastTransitionTime":"2025-12-15T05:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.559581 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.559637 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.559648 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.559670 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.559681 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:22Z","lastTransitionTime":"2025-12-15T05:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.628373 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.628558 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.628492 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:22 crc kubenswrapper[4747]: E1215 05:38:22.628746 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:22 crc kubenswrapper[4747]: E1215 05:38:22.628864 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:22 crc kubenswrapper[4747]: E1215 05:38:22.629019 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.661997 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.662037 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.662047 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.662060 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.662071 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:22Z","lastTransitionTime":"2025-12-15T05:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.763789 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.763833 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.763842 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.763859 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.763867 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:22Z","lastTransitionTime":"2025-12-15T05:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.865938 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.865979 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.865991 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.866007 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.866018 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:22Z","lastTransitionTime":"2025-12-15T05:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.954272 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmfps_89350c5d-9a77-499e-81ec-376b012cc219/kube-multus/0.log" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.954345 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmfps" event={"ID":"89350c5d-9a77-499e-81ec-376b012cc219","Type":"ContainerStarted","Data":"bf7e29913438085594b529ef0499bebcb5d59f0027e5c46d493eb0316c2c553c"} Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.968090 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.968136 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.968036 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc74
8e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:22Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.968150 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.968271 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.968284 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:22Z","lastTransitionTime":"2025-12-15T05:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.980247 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:22Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.990552 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:22Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:22 crc kubenswrapper[4747]: I1215 05:38:22.999268 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:22Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.007282 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:23Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.016059 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e29913438085594b529ef0499bebcb5d59f0027e5c46d493eb0316c2c553c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:38:20Z\\\",\\\"message\\\":\\\"2025-12-15T05:37:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a6c939b8-0d27-4921-8e30-40e8be11eed4\\\\n2025-12-15T05:37:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a6c939b8-0d27-4921-8e30-40e8be11eed4 to /host/opt/cni/bin/\\\\n2025-12-15T05:37:35Z [verbose] multus-daemon started\\\\n2025-12-15T05:37:35Z [verbose] 
Readiness Indicator file check\\\\n2025-12-15T05:38:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:38:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:23Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.026280 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4nn8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca0b2d2-cd19-409a-aa6d-df8b295adf62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4nn8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:23Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.035973 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:23Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.044714 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:23Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.053285 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9b71b51-500c-4932-b19a-559ec3d15a5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4bc1257d5ff3f04dcdce005fadc221c444afadbe862174495f6191537a58970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e44712601c304d309e50b76c61dba25a4fd
6d982f6dd1df36fb046b0473bb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82d2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:23Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.061785 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:38:23Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.070396 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.070431 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.070444 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.070463 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.070477 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:23Z","lastTransitionTime":"2025-12-15T05:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.070995 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ba935d-4d45-497f-a710-482288987eb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d682a9462fba61e03c438d541888778564c5f9614b20ae3415d06039a1b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://918387852c8b6a10cbef90523b68f2
1472cb57394fe3107fb6a96ac8e76ada07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff0666091801d67feef4ab5998d6a9c037afa1781db60c2f67046f3ec99a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:23Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.079731 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:23Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.088276 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:23Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.098080 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f
4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:23Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.105392 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:23Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:23 crc kubenswrapper[4747]: 
I1215 05:38:23.118475 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1215 05:37:57.248195 6397 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1215 05:37:57.248522 6397 reflector.go:311] 
Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1215 05:37:57.254515 6397 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1215 05:37:57.254583 6397 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1215 05:37:57.254642 6397 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:57.254715 6397 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:57.254743 6397 factory.go:656] Stopping watch factory\\\\nI1215 05:37:57.286414 6397 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1215 05:37:57.286444 6397 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1215 05:37:57.286715 6397 ovnkube.go:599] Stopped ovnkube\\\\nI1215 05:37:57.286741 6397 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1215 05:37:57.286813 6397 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd
04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:23Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.172870 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.172908 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.172919 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.172957 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.172970 4747 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:23Z","lastTransitionTime":"2025-12-15T05:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.275128 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.275162 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.275174 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.275191 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.275203 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:23Z","lastTransitionTime":"2025-12-15T05:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.377541 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.377572 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.377583 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.377597 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.377607 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:23Z","lastTransitionTime":"2025-12-15T05:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.479285 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.479329 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.479340 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.479359 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.479373 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:23Z","lastTransitionTime":"2025-12-15T05:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.581663 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.581691 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.581702 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.581715 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.581725 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:23Z","lastTransitionTime":"2025-12-15T05:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.629196 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:23 crc kubenswrapper[4747]: E1215 05:38:23.629335 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.683558 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.683600 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.683612 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.683628 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.683640 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:23Z","lastTransitionTime":"2025-12-15T05:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.785731 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.785767 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.785779 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.785796 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.785815 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:23Z","lastTransitionTime":"2025-12-15T05:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.887866 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.887939 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.887952 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.887970 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.887986 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:23Z","lastTransitionTime":"2025-12-15T05:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.990183 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.990231 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.990241 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.990257 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:23 crc kubenswrapper[4747]: I1215 05:38:23.990270 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:23Z","lastTransitionTime":"2025-12-15T05:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.091852 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.091898 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.091909 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.091949 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.091968 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:24Z","lastTransitionTime":"2025-12-15T05:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.193448 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.193477 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.193487 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.193501 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.193512 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:24Z","lastTransitionTime":"2025-12-15T05:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.295106 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.295144 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.295157 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.295173 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.295185 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:24Z","lastTransitionTime":"2025-12-15T05:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.397166 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.397208 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.397222 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.397245 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.397263 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:24Z","lastTransitionTime":"2025-12-15T05:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.499349 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.499398 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.499411 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.499428 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.499441 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:24Z","lastTransitionTime":"2025-12-15T05:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.602099 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.602135 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.602149 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.602163 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.602172 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:24Z","lastTransitionTime":"2025-12-15T05:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.629134 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.629173 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.629131 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:24 crc kubenswrapper[4747]: E1215 05:38:24.629247 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:24 crc kubenswrapper[4747]: E1215 05:38:24.629384 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:24 crc kubenswrapper[4747]: E1215 05:38:24.629461 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.704737 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.704776 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.704787 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.704810 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.704822 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:24Z","lastTransitionTime":"2025-12-15T05:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.806793 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.806852 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.806866 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.806883 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.806891 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:24Z","lastTransitionTime":"2025-12-15T05:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.909212 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.909263 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.909274 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.909289 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:24 crc kubenswrapper[4747]: I1215 05:38:24.909299 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:24Z","lastTransitionTime":"2025-12-15T05:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.011713 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.011852 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.011920 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.012032 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.012114 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:25Z","lastTransitionTime":"2025-12-15T05:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.114081 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.114115 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.114143 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.114156 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.114164 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:25Z","lastTransitionTime":"2025-12-15T05:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.215965 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.216017 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.216031 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.216050 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.216065 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:25Z","lastTransitionTime":"2025-12-15T05:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.317836 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.317876 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.317884 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.317900 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.317912 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:25Z","lastTransitionTime":"2025-12-15T05:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.419959 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.420005 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.420017 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.420034 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.420046 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:25Z","lastTransitionTime":"2025-12-15T05:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.522092 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.522132 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.522144 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.522158 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.522170 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:25Z","lastTransitionTime":"2025-12-15T05:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.624624 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.624669 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.624695 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.624712 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.624723 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:25Z","lastTransitionTime":"2025-12-15T05:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.629016 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:25 crc kubenswrapper[4747]: E1215 05:38:25.629153 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.726113 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.726149 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.726160 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.726173 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.726182 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:25Z","lastTransitionTime":"2025-12-15T05:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.832502 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.832555 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.832576 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.832594 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.832608 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:25Z","lastTransitionTime":"2025-12-15T05:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.935192 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.935219 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.935229 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.935241 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:25 crc kubenswrapper[4747]: I1215 05:38:25.935252 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:25Z","lastTransitionTime":"2025-12-15T05:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.036949 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.036987 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.036997 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.037012 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.037023 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:26Z","lastTransitionTime":"2025-12-15T05:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.138887 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.138966 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.138978 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.138989 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.138998 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:26Z","lastTransitionTime":"2025-12-15T05:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.240451 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.240493 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.240505 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.240524 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.240537 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:26Z","lastTransitionTime":"2025-12-15T05:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.342890 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.342955 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.342968 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.342986 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.342999 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:26Z","lastTransitionTime":"2025-12-15T05:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.446005 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.446050 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.446062 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.446079 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.446089 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:26Z","lastTransitionTime":"2025-12-15T05:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.547675 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.547709 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.547720 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.547736 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.547748 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:26Z","lastTransitionTime":"2025-12-15T05:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.628642 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.628708 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.628715 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:26 crc kubenswrapper[4747]: E1215 05:38:26.628810 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:26 crc kubenswrapper[4747]: E1215 05:38:26.629049 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:26 crc kubenswrapper[4747]: E1215 05:38:26.629196 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.641076 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:26Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.649007 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.649042 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.649053 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.649066 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.649321 4747 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:26Z","lastTransitionTime":"2025-12-15T05:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.652680 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:26Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.662025 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9b71b51-500c-4932-b19a-559ec3d15a5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4bc1257d5ff3f04dcdce005fadc221c444afadbe862174495f6191537a58970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e44712601c304d309e50b76c61dba25a4fd
6d982f6dd1df36fb046b0473bb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82d2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:26Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.671954 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:38:26Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.683276 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:26Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.694225 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f
4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:26Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.710964 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:26Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:26 crc kubenswrapper[4747]: 
I1215 05:38:26.724619 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1215 05:37:57.248195 6397 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1215 05:37:57.248522 6397 reflector.go:311] 
Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1215 05:37:57.254515 6397 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1215 05:37:57.254583 6397 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1215 05:37:57.254642 6397 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:57.254715 6397 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:57.254743 6397 factory.go:656] Stopping watch factory\\\\nI1215 05:37:57.286414 6397 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1215 05:37:57.286444 6397 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1215 05:37:57.286715 6397 ovnkube.go:599] Stopped ovnkube\\\\nI1215 05:37:57.286741 6397 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1215 05:37:57.286813 6397 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd
04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:26Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.733824 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ba935d-4d45-497f-a710-482288987eb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d682a9462fba61e03c438d541888778564c5f9614b20ae3415d06039a1b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://918387852c8b6a10cbef90523b68f21472cb57394fe3107fb6a96ac8e76ada07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff0666091801d67feef4ab5998d6a9c037afa1781db60c2f67046f3ec99a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:26Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.743036 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:26Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.750671 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:26Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.751315 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.751451 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.751525 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.751624 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.751684 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:26Z","lastTransitionTime":"2025-12-15T05:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.758794 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:26Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.767995 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e29913438085594b529ef0499bebcb5d59f0027e5c46d493eb0316c2c553c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:38:20Z\\\",\\\"message\\\":\\\"2025-12-15T05:37:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a6c939b8-0d27-4921-8e30-40e8be11eed4\\\\n2025-12-15T05:37:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a6c939b8-0d27-4921-8e30-40e8be11eed4 to /host/opt/cni/bin/\\\\n2025-12-15T05:37:35Z [verbose] multus-daemon started\\\\n2025-12-15T05:37:35Z [verbose] Readiness Indicator file check\\\\n2025-12-15T05:38:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:38:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:26Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.775647 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4nn8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca0b2d2-cd19-409a-aa6d-df8b295adf62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4nn8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:26Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:26 crc 
kubenswrapper[4747]: I1215 05:38:26.787142 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c3
3480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 
05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:26Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.798125 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:26Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.807016 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:26Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.853780 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.853860 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.853872 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.853892 4747 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.853904 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:26Z","lastTransitionTime":"2025-12-15T05:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.955789 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.955844 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.955858 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.955877 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:26 crc kubenswrapper[4747]: I1215 05:38:26.955892 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:26Z","lastTransitionTime":"2025-12-15T05:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.057703 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.057749 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.057760 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.057777 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.057790 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:27Z","lastTransitionTime":"2025-12-15T05:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.160378 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.160413 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.160424 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.160441 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.160453 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:27Z","lastTransitionTime":"2025-12-15T05:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.262567 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.262594 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.262603 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.262619 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.262634 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:27Z","lastTransitionTime":"2025-12-15T05:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.364431 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.364485 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.364498 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.364518 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.364530 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:27Z","lastTransitionTime":"2025-12-15T05:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.466381 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.466451 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.466463 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.466483 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.466497 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:27Z","lastTransitionTime":"2025-12-15T05:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.568372 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.568415 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.568428 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.568444 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.568457 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:27Z","lastTransitionTime":"2025-12-15T05:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.628730 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:27 crc kubenswrapper[4747]: E1215 05:38:27.628874 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.670857 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.670915 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.670948 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.670973 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.670987 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:27Z","lastTransitionTime":"2025-12-15T05:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.772688 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.772721 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.772732 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.772745 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.772757 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:27Z","lastTransitionTime":"2025-12-15T05:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.874764 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.874789 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.874807 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.874818 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.874830 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:27Z","lastTransitionTime":"2025-12-15T05:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.976224 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.976261 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.976273 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.976288 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:27 crc kubenswrapper[4747]: I1215 05:38:27.976298 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:27Z","lastTransitionTime":"2025-12-15T05:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.077517 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.077551 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.077561 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.077575 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.077585 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:28Z","lastTransitionTime":"2025-12-15T05:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.143835 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.143889 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.143900 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.143916 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.143961 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:28Z","lastTransitionTime":"2025-12-15T05:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:28 crc kubenswrapper[4747]: E1215 05:38:28.155309 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:28Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.157996 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.158035 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.158044 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.158065 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.158079 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:28Z","lastTransitionTime":"2025-12-15T05:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:28 crc kubenswrapper[4747]: E1215 05:38:28.167834 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:28Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.171198 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.171234 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.171245 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.171262 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.171275 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:28Z","lastTransitionTime":"2025-12-15T05:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:28 crc kubenswrapper[4747]: E1215 05:38:28.180457 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:28Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.182856 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.182878 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.182888 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.182897 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.182906 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:28Z","lastTransitionTime":"2025-12-15T05:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:28 crc kubenswrapper[4747]: E1215 05:38:28.196087 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:28Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.198524 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.198555 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.198563 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.198574 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.198581 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:28Z","lastTransitionTime":"2025-12-15T05:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:28 crc kubenswrapper[4747]: E1215 05:38:28.207867 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:28Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:28 crc kubenswrapper[4747]: E1215 05:38:28.208010 4747 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.209089 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.209117 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.209128 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.209141 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.209150 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:28Z","lastTransitionTime":"2025-12-15T05:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.311082 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.311110 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.311120 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.311131 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.311138 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:28Z","lastTransitionTime":"2025-12-15T05:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.412873 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.412913 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.412949 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.412965 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.412975 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:28Z","lastTransitionTime":"2025-12-15T05:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.520668 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.520730 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.520740 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.520756 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.520767 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:28Z","lastTransitionTime":"2025-12-15T05:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.623325 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.623359 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.623380 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.623396 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.623409 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:28Z","lastTransitionTime":"2025-12-15T05:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.628620 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:28 crc kubenswrapper[4747]: E1215 05:38:28.628714 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.628899 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.628946 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:28 crc kubenswrapper[4747]: E1215 05:38:28.629131 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:28 crc kubenswrapper[4747]: E1215 05:38:28.629156 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.629690 4747 scope.go:117] "RemoveContainer" containerID="72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.726047 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.726283 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.726296 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.726307 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.726317 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:28Z","lastTransitionTime":"2025-12-15T05:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.827773 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.827814 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.827825 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.827840 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.827852 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:28Z","lastTransitionTime":"2025-12-15T05:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.930384 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.930427 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.930437 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.930453 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.930475 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:28Z","lastTransitionTime":"2025-12-15T05:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.973855 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-82lhw_2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7/ovnkube-controller/2.log" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.977154 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerStarted","Data":"312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955"} Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.977582 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:38:28 crc kubenswrapper[4747]: I1215 05:38:28.997387 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-12-15T05:38:28Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.016304 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:29Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.032996 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.033021 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.033030 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.033042 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.033051 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:29Z","lastTransitionTime":"2025-12-15T05:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.033438 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:29Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.042608 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e29913438085594b529ef0499bebcb5d59f0027e5c46d493eb0316c2c553c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:38:20Z\\\",\\\"message\\\":\\\"2025-12-15T05:37:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a6c939b8-0d27-4921-8e30-40e8be11eed4\\\\n2025-12-15T05:37:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a6c939b8-0d27-4921-8e30-40e8be11eed4 to /host/opt/cni/bin/\\\\n2025-12-15T05:37:35Z [verbose] multus-daemon started\\\\n2025-12-15T05:37:35Z [verbose] Readiness Indicator file check\\\\n2025-12-15T05:38:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:38:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:29Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.051298 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4nn8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca0b2d2-cd19-409a-aa6d-df8b295adf62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4nn8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:29Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:29 crc 
kubenswrapper[4747]: I1215 05:38:29.060591 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c3
3480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 
05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:29Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.070342 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:29Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.077609 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9b71b51-500c-4932-b19a-559ec3d15a5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4bc1257d5ff3f04dcdce005fadc221c444afadbe862174495f6191537a58970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plan
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e44712601c304d309e50b76c61dba25a4fd6d982f6dd1df36fb046b0473bb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82d2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:29Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.087370 4747 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:29Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.097720 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:29Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.109029 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:38:29Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.119706 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:29Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.130575 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:29Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.135095 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.135118 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.135128 4747 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.135142 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.135153 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:29Z","lastTransitionTime":"2025-12-15T05:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.145180 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed9
7f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:29Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.168206 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489
a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:29Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.198519 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1215 05:37:57.248195 6397 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1215 05:37:57.248522 6397 reflector.go:311] 
Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1215 05:37:57.254515 6397 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1215 05:37:57.254583 6397 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1215 05:37:57.254642 6397 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:57.254715 6397 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:57.254743 6397 factory.go:656] Stopping watch factory\\\\nI1215 05:37:57.286414 6397 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1215 05:37:57.286444 6397 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1215 05:37:57.286715 6397 ovnkube.go:599] Stopped ovnkube\\\\nI1215 05:37:57.286741 6397 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1215 05:37:57.286813 6397 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:38:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:29Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.218575 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ba935d-4d45-497f-a710-482288987eb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d682a9462fba61e03c438d541888778564c5f9614b20ae3415d06039a1b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://918387852c8b6a10cbef90523b68f21472cb57394fe3107fb6a96ac8e76ada07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff0666091801d67feef4ab5998d6a9c037afa1781db60c2f67046f3ec99a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:29Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.237237 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.237274 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.237286 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.237303 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.237315 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:29Z","lastTransitionTime":"2025-12-15T05:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.339765 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.339819 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.339828 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.339845 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.339858 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:29Z","lastTransitionTime":"2025-12-15T05:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.442421 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.442464 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.442478 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.442499 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.442512 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:29Z","lastTransitionTime":"2025-12-15T05:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.544751 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.544803 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.544815 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.544830 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.544841 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:29Z","lastTransitionTime":"2025-12-15T05:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.628168 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:29 crc kubenswrapper[4747]: E1215 05:38:29.628297 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.646481 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.646526 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.646538 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.646557 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.646572 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:29Z","lastTransitionTime":"2025-12-15T05:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.748105 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.748137 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.748147 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.748159 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.748169 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:29Z","lastTransitionTime":"2025-12-15T05:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.850855 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.850898 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.850908 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.850948 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.850961 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:29Z","lastTransitionTime":"2025-12-15T05:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.953184 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.953221 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.953232 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.953248 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.953258 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:29Z","lastTransitionTime":"2025-12-15T05:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.981514 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-82lhw_2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7/ovnkube-controller/3.log" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.982021 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-82lhw_2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7/ovnkube-controller/2.log" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.984564 4747 generic.go:334] "Generic (PLEG): container finished" podID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerID="312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955" exitCode=1 Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.984614 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerDied","Data":"312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955"} Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.984666 4747 scope.go:117] "RemoveContainer" containerID="72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.986099 4747 scope.go:117] "RemoveContainer" containerID="312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955" Dec 15 05:38:29 crc kubenswrapper[4747]: E1215 05:38:29.986303 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" Dec 15 05:38:29 crc kubenswrapper[4747]: I1215 05:38:29.997380 4747 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:29Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.008874 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e5431
9f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:30Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.018215 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:30Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.033677 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72337c4b670370cacb09ca1e4d1dc3de04c6106c45141c3a6fb1b89541dddc93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:37:57Z\\\",\\\"message\\\":\\\"reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1215 05:37:57.248195 6397 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1215 05:37:57.248522 6397 reflector.go:311] 
Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1215 05:37:57.254515 6397 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1215 05:37:57.254583 6397 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1215 05:37:57.254642 6397 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1215 05:37:57.254715 6397 handler.go:208] Removed *v1.Node event handler 2\\\\nI1215 05:37:57.254743 6397 factory.go:656] Stopping watch factory\\\\nI1215 05:37:57.286414 6397 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1215 05:37:57.286444 6397 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1215 05:37:57.286715 6397 ovnkube.go:599] Stopped ovnkube\\\\nI1215 05:37:57.286741 6397 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1215 05:37:57.286813 6397 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:38:29Z\\\",\\\"message\\\":\\\"oadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1215 05:38:29.496536 6828 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1215 05:38:29.496184 6828 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-nldtn\\\\nI1215 05:38:29.496566 
6828 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI1215 05:38:29.496141 6828 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-4nn8g\\\\nI1215 05:38:29.496766 6828 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1215 05:38:29.496231 6828 obj_retry.go:386] Retry successful for *v1.Pod open\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:38:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{
\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:30Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.042612 4747 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ba935d-4d45-497f-a710-482288987eb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d682a9462fba61e03c438d541888778564c5f9614b20ae3415d06039a1b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://918387852c8b6a10cbef90523b68f21472cb57394fe3107fb6a96ac8e76ada07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff0666091801d67feef4ab5998d6a9c037afa1781db60c2f67046f3ec99a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:30Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.051361 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:30Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.055263 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.055300 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.055313 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 
05:38:30.055334 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.055357 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:30Z","lastTransitionTime":"2025-12-15T05:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.061232 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:30Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.069087 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:30Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.080052 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e29913438085594b529ef0499bebcb5d59f0027e5c46d493eb0316c2c553c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:38:20Z\\\",\\\"message\\\":\\\"2025-12-15T05:37:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a6c939b8-0d27-4921-8e30-40e8be11eed4\\\\n2025-12-15T05:37:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a6c939b8-0d27-4921-8e30-40e8be11eed4 to /host/opt/cni/bin/\\\\n2025-12-15T05:37:35Z [verbose] multus-daemon started\\\\n2025-12-15T05:37:35Z [verbose] Readiness Indicator file check\\\\n2025-12-15T05:38:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:38:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:30Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.088412 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4nn8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca0b2d2-cd19-409a-aa6d-df8b295adf62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4nn8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:30Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:30 crc 
kubenswrapper[4747]: I1215 05:38:30.098487 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c3
3480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 
05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:30Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.108293 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:30Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.117264 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:30Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.125727 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:30Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.135117 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:30Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.144158 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:30Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.152751 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9b71b51-500c-4932-b19a-559ec3d15a5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4bc1257d5ff3f04dcdce005fadc221c444afadbe862174495f6191537a58970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e44712601c304d309e50b76c61dba25a4fd
6d982f6dd1df36fb046b0473bb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82d2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:30Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.157204 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.157240 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.157250 4747 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.157265 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.157279 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:30Z","lastTransitionTime":"2025-12-15T05:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.258990 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.259028 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.259038 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.259052 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.259065 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:30Z","lastTransitionTime":"2025-12-15T05:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.361382 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.361419 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.361429 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.361447 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.361458 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:30Z","lastTransitionTime":"2025-12-15T05:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.462730 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.462770 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.462779 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.462806 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.462814 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:30Z","lastTransitionTime":"2025-12-15T05:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.565188 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.565250 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.565262 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.565280 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.565292 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:30Z","lastTransitionTime":"2025-12-15T05:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.629115 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.629155 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.629165 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 15 05:38:30 crc kubenswrapper[4747]: E1215 05:38:30.629272 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 15 05:38:30 crc kubenswrapper[4747]: E1215 05:38:30.629359 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 15 05:38:30 crc kubenswrapper[4747]: E1215 05:38:30.629535 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.667630 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.667686 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.667699 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.667712 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.667723 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:30Z","lastTransitionTime":"2025-12-15T05:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.769739 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.769777 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.769802 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.769816 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.769828 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:30Z","lastTransitionTime":"2025-12-15T05:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.872061 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.872110 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.872121 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.872136 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.872151 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:30Z","lastTransitionTime":"2025-12-15T05:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.974363 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.974397 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.974407 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.974418 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.974431 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:30Z","lastTransitionTime":"2025-12-15T05:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.990145 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-82lhw_2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7/ovnkube-controller/3.log"
Dec 15 05:38:30 crc kubenswrapper[4747]: I1215 05:38:30.993767 4747 scope.go:117] "RemoveContainer" containerID="312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955"
Dec 15 05:38:30 crc kubenswrapper[4747]: E1215 05:38:30.994109 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7"
Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.013826 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:31Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.025708 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e29913438085594b529ef0499bebcb5d59f0027e5c46d493eb0316c2c553c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:38:20Z\\\",\\\"message\\\":\\\"2025-12-15T05:37:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a6c939b8-0d27-4921-8e30-40e8be11eed4\\\\n2025-12-15T05:37:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a6c939b8-0d27-4921-8e30-40e8be11eed4 to /host/opt/cni/bin/\\\\n2025-12-15T05:37:35Z [verbose] multus-daemon started\\\\n2025-12-15T05:37:35Z [verbose] Readiness Indicator file check\\\\n2025-12-15T05:38:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:38:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:31Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.034745 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4nn8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca0b2d2-cd19-409a-aa6d-df8b295adf62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4nn8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:31Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:31 crc 
kubenswrapper[4747]: I1215 05:38:31.046460 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c3
3480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 
05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:31Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.056683 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:31Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.067825 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:31Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.076393 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.076438 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.076453 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.076469 4747 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.076460 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"ho
st\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:31Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.076479 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:31Z","lastTransitionTime":"2025-12-15T05:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.086844 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3e
dede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:31Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.096094 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:31Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.104141 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9b71b51-500c-4932-b19a-559ec3d15a5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4bc1257d5ff3f04dcdce005fadc221c444afadbe862174495f6191537a58970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e44712601c304d309e50b76c61dba25a4fd
6d982f6dd1df36fb046b0473bb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82d2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:31Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.112967 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:38:31Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.124259 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:31Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.132198 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:31Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.149241 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:38:29Z\\\",\\\"message\\\":\\\"oadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1215 05:38:29.496536 6828 default_network_controller.go:776] Recording success event on pod 
openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1215 05:38:29.496184 6828 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-nldtn\\\\nI1215 05:38:29.496566 6828 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI1215 05:38:29.496141 6828 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-4nn8g\\\\nI1215 05:38:29.496766 6828 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1215 05:38:29.496231 6828 obj_retry.go:386] Retry successful for *v1.Pod open\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:38:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd
04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:31Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.158171 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ba935d-4d45-497f-a710-482288987eb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d682a9462fba61e03c438d541888778564c5f9614b20ae3415d06039a1b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://918387852c8b6a10cbef90523b68f21472cb57394fe3107fb6a96ac8e76ada07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff0666091801d67feef4ab5998d6a9c037afa1781db60c2f67046f3ec99a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:31Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.167174 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:31Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.175848 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:31Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.178305 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.178342 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.178354 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.178375 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.178385 4747 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:31Z","lastTransitionTime":"2025-12-15T05:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.280866 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.280920 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.280955 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.280981 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.280995 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:31Z","lastTransitionTime":"2025-12-15T05:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.384445 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.384486 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.384497 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.384509 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.384525 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:31Z","lastTransitionTime":"2025-12-15T05:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.486862 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.486889 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.486898 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.486911 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.486920 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:31Z","lastTransitionTime":"2025-12-15T05:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.588861 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.588906 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.588916 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.588988 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.589004 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:31Z","lastTransitionTime":"2025-12-15T05:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.628747 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:31 crc kubenswrapper[4747]: E1215 05:38:31.628874 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.690592 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.690631 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.690642 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.690661 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.690672 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:31Z","lastTransitionTime":"2025-12-15T05:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.793059 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.793114 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.793126 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.793142 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.793153 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:31Z","lastTransitionTime":"2025-12-15T05:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.895198 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.895246 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.895257 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.895270 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.895278 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:31Z","lastTransitionTime":"2025-12-15T05:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.996680 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.996722 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.996733 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.996748 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:31 crc kubenswrapper[4747]: I1215 05:38:31.996760 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:31Z","lastTransitionTime":"2025-12-15T05:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.099154 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.099204 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.099215 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.099231 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.099262 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:32Z","lastTransitionTime":"2025-12-15T05:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.201761 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.201817 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.201831 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.201855 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.201867 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:32Z","lastTransitionTime":"2025-12-15T05:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.303714 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.303751 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.303762 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.303774 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.303791 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:32Z","lastTransitionTime":"2025-12-15T05:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.406020 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.406056 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.406066 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.406076 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.406086 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:32Z","lastTransitionTime":"2025-12-15T05:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.508233 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.508280 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.508293 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.508308 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.508319 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:32Z","lastTransitionTime":"2025-12-15T05:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.609889 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.609966 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.609979 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.609996 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.610009 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:32Z","lastTransitionTime":"2025-12-15T05:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.628412 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.628485 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:32 crc kubenswrapper[4747]: E1215 05:38:32.628551 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.628633 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:32 crc kubenswrapper[4747]: E1215 05:38:32.628732 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:32 crc kubenswrapper[4747]: E1215 05:38:32.628837 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.711853 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.711891 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.711902 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.711915 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.711942 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:32Z","lastTransitionTime":"2025-12-15T05:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.818689 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.818875 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.818893 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.819256 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.819270 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:32Z","lastTransitionTime":"2025-12-15T05:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.921894 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.921956 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.921966 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.921983 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:32 crc kubenswrapper[4747]: I1215 05:38:32.921994 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:32Z","lastTransitionTime":"2025-12-15T05:38:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.023507 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.023556 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.023565 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.023583 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.023594 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:33Z","lastTransitionTime":"2025-12-15T05:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.125811 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.125843 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.125851 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.125863 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.125871 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:33Z","lastTransitionTime":"2025-12-15T05:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.227443 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.227482 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.227492 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.227505 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.227516 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:33Z","lastTransitionTime":"2025-12-15T05:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.329566 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.329594 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.329601 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.329611 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.329621 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:33Z","lastTransitionTime":"2025-12-15T05:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.433119 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.433187 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.433200 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.433224 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.433240 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:33Z","lastTransitionTime":"2025-12-15T05:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.535885 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.535959 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.535973 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.535989 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.536002 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:33Z","lastTransitionTime":"2025-12-15T05:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.628856 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:33 crc kubenswrapper[4747]: E1215 05:38:33.628985 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.638335 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.638403 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.638416 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.638432 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.638441 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:33Z","lastTransitionTime":"2025-12-15T05:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.740539 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.740684 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.740697 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.740710 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.740718 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:33Z","lastTransitionTime":"2025-12-15T05:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.842977 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.843016 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.843025 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.843041 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.843051 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:33Z","lastTransitionTime":"2025-12-15T05:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.944821 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.944849 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.944858 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.944872 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:33 crc kubenswrapper[4747]: I1215 05:38:33.944882 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:33Z","lastTransitionTime":"2025-12-15T05:38:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.046892 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.046957 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.046971 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.046983 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.046995 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:34Z","lastTransitionTime":"2025-12-15T05:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.149217 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.149251 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.149262 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.149275 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.149285 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:34Z","lastTransitionTime":"2025-12-15T05:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.250543 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.250595 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.250606 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.250622 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.250633 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:34Z","lastTransitionTime":"2025-12-15T05:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.352762 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.352811 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.352823 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.352835 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.352842 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:34Z","lastTransitionTime":"2025-12-15T05:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.455057 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.455100 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.455113 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.455154 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.455166 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:34Z","lastTransitionTime":"2025-12-15T05:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.556510 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.556554 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.556566 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.556581 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.556592 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:34Z","lastTransitionTime":"2025-12-15T05:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.628398 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.628472 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:34 crc kubenswrapper[4747]: E1215 05:38:34.628555 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.628643 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:34 crc kubenswrapper[4747]: E1215 05:38:34.628698 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:34 crc kubenswrapper[4747]: E1215 05:38:34.628803 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.658050 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.658172 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.658239 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.658307 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.658373 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:34Z","lastTransitionTime":"2025-12-15T05:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.760826 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.760851 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.760860 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.760872 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.760882 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:34Z","lastTransitionTime":"2025-12-15T05:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.863090 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.863130 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.863141 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.863157 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.863168 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:34Z","lastTransitionTime":"2025-12-15T05:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.965129 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.965163 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.965174 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.965186 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:34 crc kubenswrapper[4747]: I1215 05:38:34.965197 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:34Z","lastTransitionTime":"2025-12-15T05:38:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.067572 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.068024 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.068085 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.068146 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.068219 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:35Z","lastTransitionTime":"2025-12-15T05:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.170531 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.170570 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.170581 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.170593 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.170600 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:35Z","lastTransitionTime":"2025-12-15T05:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.272133 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.272163 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.272171 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.272186 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.272196 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:35Z","lastTransitionTime":"2025-12-15T05:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.373786 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.373820 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.373829 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.373841 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.373852 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:35Z","lastTransitionTime":"2025-12-15T05:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.476982 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.477028 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.477039 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.477061 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.477072 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:35Z","lastTransitionTime":"2025-12-15T05:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.580709 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.580746 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.580757 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.580770 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.580789 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:35Z","lastTransitionTime":"2025-12-15T05:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.628170 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:35 crc kubenswrapper[4747]: E1215 05:38:35.628281 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.683293 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.683344 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.683354 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.683368 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.683377 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:35Z","lastTransitionTime":"2025-12-15T05:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.784838 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.784871 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.784881 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.784910 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.784920 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:35Z","lastTransitionTime":"2025-12-15T05:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.887344 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.887393 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.887406 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.887417 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.887430 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:35Z","lastTransitionTime":"2025-12-15T05:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.989760 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.989794 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.989804 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.989815 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:35 crc kubenswrapper[4747]: I1215 05:38:35.989823 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:35Z","lastTransitionTime":"2025-12-15T05:38:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.091589 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.091617 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.091645 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.091656 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.091663 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:36Z","lastTransitionTime":"2025-12-15T05:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.193382 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.193418 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.193427 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.193442 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.193451 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:36Z","lastTransitionTime":"2025-12-15T05:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.295329 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.295367 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.295376 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.295389 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.295399 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:36Z","lastTransitionTime":"2025-12-15T05:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.396817 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.396867 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.396875 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.396885 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.396894 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:36Z","lastTransitionTime":"2025-12-15T05:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.498563 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.498615 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.498626 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.498649 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.498661 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:36Z","lastTransitionTime":"2025-12-15T05:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.600750 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.600811 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.600821 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.600836 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.600848 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:36Z","lastTransitionTime":"2025-12-15T05:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.629242 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.629329 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.629259 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:36 crc kubenswrapper[4747]: E1215 05:38:36.629406 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:36 crc kubenswrapper[4747]: E1215 05:38:36.629710 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:36 crc kubenswrapper[4747]: E1215 05:38:36.629813 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.641498 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os
-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.649663 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdea
f3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.663683 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:38:29Z\\\",\\\"message\\\":\\\"oadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1215 05:38:29.496536 6828 default_network_controller.go:776] Recording success event on pod 
openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1215 05:38:29.496184 6828 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-nldtn\\\\nI1215 05:38:29.496566 6828 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI1215 05:38:29.496141 6828 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-4nn8g\\\\nI1215 05:38:29.496766 6828 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1215 05:38:29.496231 6828 obj_retry.go:386] Retry successful for *v1.Pod open\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:38:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd
04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.671453 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ba935d-4d45-497f-a710-482288987eb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d682a9462fba61e03c438d541888778564c5f9614b20ae3415d06039a1b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://918387852c8b6a10cbef90523b68f21472cb57394fe3107fb6a96ac8e76ada07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff0666091801d67feef4ab5998d6a9c037afa1781db60c2f67046f3ec99a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.679428 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.687657 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.694272 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.702537 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.702572 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.702586 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.702808 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.702833 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:36Z","lastTransitionTime":"2025-12-15T05:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.705954 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e29913438085594b529ef0499bebcb5d59f0027e5c46d493eb0316c2c553c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:38:20Z\\\",\\\"message\\\":\\\"2025-12-15T05:37:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a6c939b8-0d27-4921-8e30-40e8be11eed4\\\\n2025-12-15T05:37:35+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a6c939b8-0d27-4921-8e30-40e8be11eed4 to /host/opt/cni/bin/\\\\n2025-12-15T05:37:35Z [verbose] multus-daemon started\\\\n2025-12-15T05:37:35Z [verbose] Readiness Indicator file check\\\\n2025-12-15T05:38:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:38:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.714174 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4nn8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca0b2d2-cd19-409a-aa6d-df8b295adf62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4nn8g\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.724621 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.734032 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.743654 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.751674 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.759731 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.773252 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.781747 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9b71b51-500c-4932-b19a-559ec3d15a5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4bc1257d5ff3f04dcdce005fadc221c444afadbe862174495f6191537a58970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e44712601c304d309e50b76c61dba25a4fd
6d982f6dd1df36fb046b0473bb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82d2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.790403 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-15T05:38:36Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.805213 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.805254 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.805267 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.805288 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.805301 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:36Z","lastTransitionTime":"2025-12-15T05:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.907896 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.907946 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.907973 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.907986 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:36 crc kubenswrapper[4747]: I1215 05:38:36.907996 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:36Z","lastTransitionTime":"2025-12-15T05:38:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.009381 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.009429 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.009439 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.009457 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.009472 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:37Z","lastTransitionTime":"2025-12-15T05:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.111577 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.111636 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.111648 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.111662 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.111671 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:37Z","lastTransitionTime":"2025-12-15T05:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.214232 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.214272 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.214283 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.214299 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.214309 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:37Z","lastTransitionTime":"2025-12-15T05:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.316738 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.316795 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.316805 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.316825 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.316841 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:37Z","lastTransitionTime":"2025-12-15T05:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.418701 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.418750 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.418759 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.418787 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.418803 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:37Z","lastTransitionTime":"2025-12-15T05:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.520988 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.521029 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.521038 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.521053 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.521064 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:37Z","lastTransitionTime":"2025-12-15T05:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.623781 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.624000 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.624076 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.624157 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.624224 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:37Z","lastTransitionTime":"2025-12-15T05:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.628699 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:37 crc kubenswrapper[4747]: E1215 05:38:37.629006 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.726193 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.726328 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.726421 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.726517 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.726605 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:37Z","lastTransitionTime":"2025-12-15T05:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.828685 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.828712 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.828723 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.828736 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.828747 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:37Z","lastTransitionTime":"2025-12-15T05:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.931176 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.931228 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.931239 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.931258 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:37 crc kubenswrapper[4747]: I1215 05:38:37.931274 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:37Z","lastTransitionTime":"2025-12-15T05:38:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.032866 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.032911 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.032920 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.032955 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.032969 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:38Z","lastTransitionTime":"2025-12-15T05:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.134672 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.134711 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.134723 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.134738 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.134748 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:38Z","lastTransitionTime":"2025-12-15T05:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.236872 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.236912 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.236921 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.236950 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.236961 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:38Z","lastTransitionTime":"2025-12-15T05:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.339468 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.339494 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.339503 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.339513 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.339522 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:38Z","lastTransitionTime":"2025-12-15T05:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.441542 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.441567 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.441576 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.441584 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.441592 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:38Z","lastTransitionTime":"2025-12-15T05:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.482443 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.482684 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.482695 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.482708 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.482720 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:38Z","lastTransitionTime":"2025-12-15T05:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.495448 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.499176 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.499233 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.499244 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.499263 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.499275 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:38Z","lastTransitionTime":"2025-12-15T05:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.514598 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.518233 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.518308 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.518321 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.518338 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.518349 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:38Z","lastTransitionTime":"2025-12-15T05:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.527873 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.531401 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.531438 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.531452 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.531474 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.531487 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:38Z","lastTransitionTime":"2025-12-15T05:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.535792 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.535921 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:42.535890942 +0000 UTC m=+146.232402859 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.541346 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.544762 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.544808 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.544822 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.544836 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.544847 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:38Z","lastTransitionTime":"2025-12-15T05:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.553483 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:38Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.553615 4747 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.555040 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.555091 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.555104 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.555117 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.555126 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:38Z","lastTransitionTime":"2025-12-15T05:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.629030 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.629078 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.629033 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.629185 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.629279 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.629419 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.636281 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.636328 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.636360 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.636387 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.636463 4747 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.636508 4747 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.636535 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-15 05:39:42.636518625 +0000 UTC m=+146.333030542 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.636556 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-15 05:39:42.63654707 +0000 UTC m=+146.333058986 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.636563 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.636581 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.636623 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.636591 4747 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.636638 4747 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.636652 4747 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.636717 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-15 05:39:42.636697161 +0000 UTC m=+146.333209088 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:38:38 crc kubenswrapper[4747]: E1215 05:38:38.636738 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-15 05:39:42.636730624 +0000 UTC m=+146.333242551 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.657436 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.657470 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.657481 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.657497 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.657507 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:38Z","lastTransitionTime":"2025-12-15T05:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.759182 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.759219 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.759229 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.759251 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.759262 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:38Z","lastTransitionTime":"2025-12-15T05:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.861125 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.861159 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.861170 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.861235 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.861250 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:38Z","lastTransitionTime":"2025-12-15T05:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.963330 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.963387 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.963398 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.963411 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:38 crc kubenswrapper[4747]: I1215 05:38:38.963420 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:38Z","lastTransitionTime":"2025-12-15T05:38:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.065054 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.065101 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.065115 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.065133 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.065145 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:39Z","lastTransitionTime":"2025-12-15T05:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.166803 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.166835 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.166845 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.166887 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.166896 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:39Z","lastTransitionTime":"2025-12-15T05:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.269582 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.269612 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.269623 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.269635 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.269646 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:39Z","lastTransitionTime":"2025-12-15T05:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.371540 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.371580 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.371596 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.371612 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.371623 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:39Z","lastTransitionTime":"2025-12-15T05:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.474196 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.474274 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.474288 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.474310 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.474325 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:39Z","lastTransitionTime":"2025-12-15T05:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.576980 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.577039 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.577053 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.577075 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.577089 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:39Z","lastTransitionTime":"2025-12-15T05:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.628661 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:39 crc kubenswrapper[4747]: E1215 05:38:39.628828 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.642400 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.679601 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.679637 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.679647 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.679664 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.679675 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:39Z","lastTransitionTime":"2025-12-15T05:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.782253 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.782301 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.782310 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.782326 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.782343 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:39Z","lastTransitionTime":"2025-12-15T05:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.884696 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.884732 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.884742 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.884754 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.884774 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:39Z","lastTransitionTime":"2025-12-15T05:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.986888 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.986920 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.986951 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.986963 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:39 crc kubenswrapper[4747]: I1215 05:38:39.986975 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:39Z","lastTransitionTime":"2025-12-15T05:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.088620 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.088652 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.088661 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.088689 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.088698 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:40Z","lastTransitionTime":"2025-12-15T05:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.191046 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.191069 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.191078 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.191090 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.191099 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:40Z","lastTransitionTime":"2025-12-15T05:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.298508 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.298560 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.298580 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.298597 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.298606 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:40Z","lastTransitionTime":"2025-12-15T05:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.401038 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.401078 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.401087 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.401104 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.401114 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:40Z","lastTransitionTime":"2025-12-15T05:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.503496 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.503530 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.503540 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.503552 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.503562 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:40Z","lastTransitionTime":"2025-12-15T05:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.605174 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.605206 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.605217 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.605230 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.605242 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:40Z","lastTransitionTime":"2025-12-15T05:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.628660 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.628850 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:40 crc kubenswrapper[4747]: E1215 05:38:40.629028 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.629164 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:40 crc kubenswrapper[4747]: E1215 05:38:40.629286 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:40 crc kubenswrapper[4747]: E1215 05:38:40.629388 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.640559 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.707048 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.707097 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.707108 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.707129 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.707142 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:40Z","lastTransitionTime":"2025-12-15T05:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.809208 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.809240 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.809265 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.809278 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.809289 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:40Z","lastTransitionTime":"2025-12-15T05:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.911148 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.911182 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.911194 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.911208 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:40 crc kubenswrapper[4747]: I1215 05:38:40.911217 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:40Z","lastTransitionTime":"2025-12-15T05:38:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.013369 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.013399 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.013410 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.013422 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.013430 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:41Z","lastTransitionTime":"2025-12-15T05:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.115543 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.115580 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.115590 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.115606 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.115617 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:41Z","lastTransitionTime":"2025-12-15T05:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.217446 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.217482 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.217494 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.217514 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.217521 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:41Z","lastTransitionTime":"2025-12-15T05:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.319552 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.319595 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.319605 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.319623 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.319635 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:41Z","lastTransitionTime":"2025-12-15T05:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.421180 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.421219 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.421233 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.421250 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.421261 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:41Z","lastTransitionTime":"2025-12-15T05:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.523437 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.523474 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.523483 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.523498 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.523509 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:41Z","lastTransitionTime":"2025-12-15T05:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.625592 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.625628 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.625638 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.625654 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.625664 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:41Z","lastTransitionTime":"2025-12-15T05:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.628858 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:41 crc kubenswrapper[4747]: E1215 05:38:41.628987 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.727995 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.728118 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.728185 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.728244 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.728297 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:41Z","lastTransitionTime":"2025-12-15T05:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.830007 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.830079 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.830092 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.830104 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.830115 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:41Z","lastTransitionTime":"2025-12-15T05:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.932163 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.932200 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.932209 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.932226 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:41 crc kubenswrapper[4747]: I1215 05:38:41.932236 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:41Z","lastTransitionTime":"2025-12-15T05:38:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.033636 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.033679 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.033692 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.033709 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.033726 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:42Z","lastTransitionTime":"2025-12-15T05:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.135838 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.135867 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.135877 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.135893 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.135904 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:42Z","lastTransitionTime":"2025-12-15T05:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.237704 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.237743 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.237761 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.237776 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.237787 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:42Z","lastTransitionTime":"2025-12-15T05:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.339519 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.339558 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.339569 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.339581 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.339590 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:42Z","lastTransitionTime":"2025-12-15T05:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.441333 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.441361 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.441369 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.441382 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.441391 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:42Z","lastTransitionTime":"2025-12-15T05:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.543644 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.543684 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.543700 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.543714 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.543723 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:42Z","lastTransitionTime":"2025-12-15T05:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.628893 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.628961 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.628893 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:42 crc kubenswrapper[4747]: E1215 05:38:42.629060 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:42 crc kubenswrapper[4747]: E1215 05:38:42.629162 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:42 crc kubenswrapper[4747]: E1215 05:38:42.629247 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.645299 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.645334 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.645348 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.645362 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.645369 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:42Z","lastTransitionTime":"2025-12-15T05:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.747790 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.747837 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.747849 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.747863 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.747876 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:42Z","lastTransitionTime":"2025-12-15T05:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.850131 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.850190 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.850205 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.850222 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.850246 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:42Z","lastTransitionTime":"2025-12-15T05:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.952790 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.952825 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.952838 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.952849 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:42 crc kubenswrapper[4747]: I1215 05:38:42.952857 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:42Z","lastTransitionTime":"2025-12-15T05:38:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.054608 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.054653 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.054665 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.054681 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.054692 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:43Z","lastTransitionTime":"2025-12-15T05:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.156965 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.157226 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.157335 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.157414 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.157471 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:43Z","lastTransitionTime":"2025-12-15T05:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.259854 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.259888 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.259899 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.259911 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.259920 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:43Z","lastTransitionTime":"2025-12-15T05:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.362585 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.362615 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.362626 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.362640 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.362650 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:43Z","lastTransitionTime":"2025-12-15T05:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.464360 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.464488 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.464555 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.464616 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.464696 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:43Z","lastTransitionTime":"2025-12-15T05:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.566819 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.566973 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.567039 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.567094 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.567145 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:43Z","lastTransitionTime":"2025-12-15T05:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.628690 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:43 crc kubenswrapper[4747]: E1215 05:38:43.628830 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.669585 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.669677 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.669733 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.669818 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.670201 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:43Z","lastTransitionTime":"2025-12-15T05:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.771893 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.771964 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.771975 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.771996 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.772009 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:43Z","lastTransitionTime":"2025-12-15T05:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.873993 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.874024 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.874033 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.874047 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.874054 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:43Z","lastTransitionTime":"2025-12-15T05:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.975963 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.976015 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.976029 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.976048 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:43 crc kubenswrapper[4747]: I1215 05:38:43.976059 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:43Z","lastTransitionTime":"2025-12-15T05:38:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.077585 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.077621 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.077632 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.077645 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.077657 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:44Z","lastTransitionTime":"2025-12-15T05:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.179693 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.179715 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.179727 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.179738 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.179747 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:44Z","lastTransitionTime":"2025-12-15T05:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.281648 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.282018 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.282087 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.282161 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.282227 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:44Z","lastTransitionTime":"2025-12-15T05:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.383805 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.383856 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.383870 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.383885 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.383896 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:44Z","lastTransitionTime":"2025-12-15T05:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.485896 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.485964 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.485977 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.485993 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.486002 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:44Z","lastTransitionTime":"2025-12-15T05:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.587477 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.587883 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.587977 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.588078 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.588268 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:44Z","lastTransitionTime":"2025-12-15T05:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.629173 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.629197 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.629297 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:44 crc kubenswrapper[4747]: E1215 05:38:44.629440 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:44 crc kubenswrapper[4747]: E1215 05:38:44.629565 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:44 crc kubenswrapper[4747]: E1215 05:38:44.629646 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.690601 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.690647 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.690657 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.690673 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.690687 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:44Z","lastTransitionTime":"2025-12-15T05:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.792973 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.793004 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.793015 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.793028 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.793038 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:44Z","lastTransitionTime":"2025-12-15T05:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.895357 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.895491 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.895556 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.895639 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.895717 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:44Z","lastTransitionTime":"2025-12-15T05:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.998150 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.998257 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.998325 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.998397 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:44 crc kubenswrapper[4747]: I1215 05:38:44.998463 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:44Z","lastTransitionTime":"2025-12-15T05:38:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.101088 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.101611 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.101697 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.101798 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.101874 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:45Z","lastTransitionTime":"2025-12-15T05:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.204552 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.204590 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.204600 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.204612 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.204623 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:45Z","lastTransitionTime":"2025-12-15T05:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.306703 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.306742 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.306763 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.306777 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.306787 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:45Z","lastTransitionTime":"2025-12-15T05:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.408915 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.408970 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.408980 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.408992 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.409003 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:45Z","lastTransitionTime":"2025-12-15T05:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.511218 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.511253 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.511262 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.511274 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.511283 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:45Z","lastTransitionTime":"2025-12-15T05:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.613773 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.613810 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.613820 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.613834 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.613843 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:45Z","lastTransitionTime":"2025-12-15T05:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.628129 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:45 crc kubenswrapper[4747]: E1215 05:38:45.628243 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.716448 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.716497 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.716508 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.716528 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.716542 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:45Z","lastTransitionTime":"2025-12-15T05:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.818444 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.818479 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.818491 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.818505 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.818515 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:45Z","lastTransitionTime":"2025-12-15T05:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.920912 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.920975 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.920985 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.920999 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:45 crc kubenswrapper[4747]: I1215 05:38:45.921010 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:45Z","lastTransitionTime":"2025-12-15T05:38:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.023010 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.023051 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.023062 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.023074 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.023082 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:46Z","lastTransitionTime":"2025-12-15T05:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.125271 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.125309 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.125337 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.125351 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.125361 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:46Z","lastTransitionTime":"2025-12-15T05:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.226780 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.226823 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.226833 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.226844 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.226856 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:46Z","lastTransitionTime":"2025-12-15T05:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.328425 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.328449 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.328457 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.328468 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.328476 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:46Z","lastTransitionTime":"2025-12-15T05:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.430245 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.430281 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.430294 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.430311 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.430331 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:46Z","lastTransitionTime":"2025-12-15T05:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.532326 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.532384 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.532395 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.532419 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.532432 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:46Z","lastTransitionTime":"2025-12-15T05:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.628468 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:46 crc kubenswrapper[4747]: E1215 05:38:46.628648 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.628682 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.628916 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:46 crc kubenswrapper[4747]: E1215 05:38:46.629450 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:46 crc kubenswrapper[4747]: E1215 05:38:46.629559 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.629604 4747 scope.go:117] "RemoveContainer" containerID="312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955" Dec 15 05:38:46 crc kubenswrapper[4747]: E1215 05:38:46.629799 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.633971 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.634069 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.634136 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.634205 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.634268 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:46Z","lastTransitionTime":"2025-12-15T05:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.640074 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02b1ad69-88a4-40e6-a7c4-409b8d42f0cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://317d78781624437d29c71b7ca4fa83e80bd20b5f05ab3ed6d1f7177abf0b77d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe7a2e5d3e
dede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://253522075c9e770e558c77f98b826cbb502f402d3806f640dc78d752baf574c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23a7828ef5239d7d64be1ee51a07c50e713808269c5ba2c94fde9e3a2470713a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.652073 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.660949 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9b71b51-500c-4932-b19a-559ec3d15a5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4bc1257d5ff3f04dcdce005fadc221c444afadbe862174495f6191537a58970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e44712601c304d309e50b76c61dba25a4fd
6d982f6dd1df36fb046b0473bb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjjhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-82d2t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.674258 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"922be617-035c-4755-b7b5-53b31200af46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bbbc86cc03ec6d63e721f27569a456fded46b9f2ddc4f808843f153b2ba9b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c291f4c04f2bba5f7e84accfd5d45951ea6104fb615e8df30e2f47e0514cc268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11b192db156f8fcdbc1060b83350d80b7a1a33dbd35a0af08019384e4b2574a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b9c4bc5a44c96d2498bdaa9c1ca462f39dcdb89cd8df40e47f31fc96d685562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a646158b15b786a1b3196337a7f8e9b60eb779ff6d1fbe6621a8210f834271b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d611c6747452e170832a06a690656905b3cba3c778efaafc06fb1ac664b3e9e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d611c6747452e170832a06a690656905b3cba3c778efaafc06fb1ac664b3e9e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae1eed53c4cf523187d589efd6c88e5e7434da8520f2e8f947ac0ded1a79a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae1eed53c4cf523187d589efd6c88e5e7434da8520f2e8f947ac0ded1a79a8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://05b4ede1deff8cce04af8da966d035a333c9ba1f6b1800f18aa2a56e5e9c3ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b4ede1deff8cce04af8da966d035a333c9ba1f6b1800f18aa2a56e5e9c3ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.682254 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a223bb34540a7bc57028079a1b97da25865265d93931192b6be84a5625714aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.690662 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d50e5c9-7ce9-40c0-b942-01031654d27c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3d9fff925cadcb3478997fe799c7fde5e75f670ad1309f5c6b15c3c534b673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e211d6177b24044383a0cc22cceb80f6442489
a5d2b4adbafc3d36637b3a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wpldx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nldtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.704271 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-15T05:38:29Z\\\",\\\"message\\\":\\\"oadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1215 05:38:29.496536 6828 default_network_controller.go:776] Recording success event on pod 
openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1215 05:38:29.496184 6828 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-nldtn\\\\nI1215 05:38:29.496566 6828 lb_config.go:1031] Cluster endpoints for openshift-operator-lifecycle-manager/olm-operator-metrics for network=default are: map[]\\\\nI1215 05:38:29.496141 6828 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-4nn8g\\\\nI1215 05:38:29.496766 6828 services_controller.go:443] Built service openshift-operator-lifecycle-manager/olm-operator-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.168\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1215 05:38:29.496231 6828 obj_retry.go:386] Retry successful for *v1.Pod open\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:38:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4e15be1324d7a0fd
04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwzq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-82lhw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.712920 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6dfabe3-f286-477d-9a2c-eafc2bb8aaec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c45c82ced788dd70d55b8e7aa52c86b1345a6b23f4b1869d80b42b32a0cb7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec4f7d4028bf6b15095d8a52e3a5a0faf94db3b63c29f93d93a380bebb963c49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec4f7d4028bf6b15095d8a52e3a5a0faf94db3b63c29f93d93a380bebb963c49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.722823 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ba935d-4d45-497f-a710-482288987eb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d682a9462fba61e03c438d541888778564c5f9614b20ae3415d06039a1b422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://918387852c8b6a10cbef90523b68f21472cb57394fe3107fb6a96ac8e76ada07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ff0666091801d67feef4ab5998d6a9c037afa1781db60c2f67046f3ec99a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8bc359099b8d59b6ef6bdf25fa9b5b30e446e9897cfb56b16408dab6a88f8c54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.731653 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.736204 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.736249 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.736264 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 
05:38:46.736282 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.736294 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:46Z","lastTransitionTime":"2025-12-15T05:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.741245 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.751910 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b19a93a-5d3a-44c6-b207-8e4ee3be6c20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f3b6ddce69ca768ed97f2305771f13fdcc0608fd58b91ee8f9c000e6786af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb1e9ca94f53b690bc5f839c68a716b00cebe6352965f681251b3b6c15dc7273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc453c7398c9accc43d9efe655feb40fa96b54cafbc36707061a86f8a3b79ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498d82281368982c61afe6b4d26f80aed897a7854a7f777c18c62754e695626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb8f
4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fb8f4d8ec9cac62a29cf68deb185e9a5250fbbbcc54271eab4b883acfd86162\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fe78ca73e8f03ebd28a2fc514b5e974aef00ff2fa52e4f54e8124fef7678278\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64335267ac648afb02252060e4b4d687d9a2edef8d5bc84cba6bc67c9e47d218\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfkwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pc5tw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.760852 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmfps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89350c5d-9a77-499e-81ec-376b012cc219\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7e29913438085594b529ef0499bebcb5d59f0027e5c46d493eb0316c2c553c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2025-12-15T05:38:20Z\\\",\\\"message\\\":\\\"2025-12-15T05:37:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a6c939b8-0d27-4921-8e30-40e8be11eed4\\\\n2025-12-15T05:37:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a6c939b8-0d27-4921-8e30-40e8be11eed4 to /host/opt/cni/bin/\\\\n2025-12-15T05:37:35Z [verbose] multus-daemon started\\\\n2025-12-15T05:37:35Z [verbose] Readiness Indicator file check\\\\n2025-12-15T05:38:20Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:38:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbpgt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmfps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.768335 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4nn8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fca0b2d2-cd19-409a-aa6d-df8b295adf62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bn8pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4nn8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:46 crc 
kubenswrapper[4747]: I1215 05:38:46.777536 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31db1d28-81c8-4eae-989c-49168bd4e711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbff1352c532c3
3480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-15T05:37:33Z\\\",\\\"message\\\":\\\"le observer\\\\nW1215 05:37:33.642096 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1215 05:37:33.642525 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1215 05:37:33.645063 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1979444596/tls.crt::/tmp/serving-cert-1979444596/tls.key\\\\\\\"\\\\nI1215 05:37:33.801352 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1215 05:37:33.804097 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1215 05:37:33.804116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1215 05:37:33.804137 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1215 05:37:33.804142 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1215 05:37:33.809816 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1215 05:37:33.809843 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809848 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1215 05:37:33.809855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1215 05:37:33.809857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1215 05:37:33.809860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1215 
05:37:33.809862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1215 05:37:33.810091 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1215 05:37:33.812167 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-15T05:37:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-15T05:37:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.786206 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f8b29ca57cdee8b65a3f3a0536e36f0e7e4666850221e623ca0f8186fd18e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.795405 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://582ccb05087bcbafc98ab0ee2114e9045b2ff301e10ba99cf8ddcd285cae9d64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a10d569d887d2d91a5cb36b7193d4bfecf6a3177e238396b8863806aacf9df19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.803451 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c83e90-bb8c-4909-9633-8f59ca12db6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5babf7321502d73aabf3a859529005a01ff938a5bdb4032d6eee6ace84be575\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg2m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.811141 4747 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p2w9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abdca76-2fcd-44fc-a09d-ded3084306d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-15T05:37:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e6c91da690038d3b65310ea18371acab81ebb6542f2072f4ac4e08f74f04de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-15T05:37:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw7nq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-15T05:37:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p2w9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:46Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.838841 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.839011 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.839127 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.839220 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.839328 4747 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:46Z","lastTransitionTime":"2025-12-15T05:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.941008 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.941181 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.941285 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.941355 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:46 crc kubenswrapper[4747]: I1215 05:38:46.941412 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:46Z","lastTransitionTime":"2025-12-15T05:38:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.043244 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.043285 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.043297 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.043316 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.043329 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:47Z","lastTransitionTime":"2025-12-15T05:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.144994 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.145036 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.145046 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.145063 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.145081 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:47Z","lastTransitionTime":"2025-12-15T05:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.247331 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.247384 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.247394 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.247411 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.247423 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:47Z","lastTransitionTime":"2025-12-15T05:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.349260 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.349317 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.349329 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.349344 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.349354 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:47Z","lastTransitionTime":"2025-12-15T05:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.451311 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.451344 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.451353 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.451365 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.451384 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:47Z","lastTransitionTime":"2025-12-15T05:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.552378 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.552417 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.552426 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.552437 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.552446 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:47Z","lastTransitionTime":"2025-12-15T05:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.628809 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:47 crc kubenswrapper[4747]: E1215 05:38:47.628967 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.654347 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.654379 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.654389 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.654400 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.654411 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:47Z","lastTransitionTime":"2025-12-15T05:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.756231 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.756282 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.756293 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.756305 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.756316 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:47Z","lastTransitionTime":"2025-12-15T05:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.858103 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.858142 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.858152 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.858167 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.858182 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:47Z","lastTransitionTime":"2025-12-15T05:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.960290 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.960319 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.960327 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.960338 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:47 crc kubenswrapper[4747]: I1215 05:38:47.960347 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:47Z","lastTransitionTime":"2025-12-15T05:38:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.062115 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.062146 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.062155 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.062166 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.062176 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:48Z","lastTransitionTime":"2025-12-15T05:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.163996 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.164049 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.164059 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.164069 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.164077 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:48Z","lastTransitionTime":"2025-12-15T05:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.266148 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.266185 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.266195 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.266207 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.266218 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:48Z","lastTransitionTime":"2025-12-15T05:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.368220 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.368255 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.368265 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.368283 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.368294 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:48Z","lastTransitionTime":"2025-12-15T05:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.470375 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.470416 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.470426 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.470439 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.470460 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:48Z","lastTransitionTime":"2025-12-15T05:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.571860 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.572086 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.572103 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.572129 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.572152 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:48Z","lastTransitionTime":"2025-12-15T05:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.594169 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.594208 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.594221 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.594237 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.594247 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:48Z","lastTransitionTime":"2025-12-15T05:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:48 crc kubenswrapper[4747]: E1215 05:38:48.604667 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:48Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.608163 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.608200 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.608215 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.608228 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.608242 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:48Z","lastTransitionTime":"2025-12-15T05:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:48 crc kubenswrapper[4747]: E1215 05:38:48.618223 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:48Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.620817 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.620854 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.620868 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.620883 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.620897 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:48Z","lastTransitionTime":"2025-12-15T05:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.628676 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.628699 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:48 crc kubenswrapper[4747]: E1215 05:38:48.628804 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:48 crc kubenswrapper[4747]: E1215 05:38:48.628988 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.629140 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:48 crc kubenswrapper[4747]: E1215 05:38:48.629311 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:48 crc kubenswrapper[4747]: E1215 05:38:48.629297 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:48Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.632653 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.632681 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.632691 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.632704 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.632720 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:48Z","lastTransitionTime":"2025-12-15T05:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:48 crc kubenswrapper[4747]: E1215 05:38:48.640887 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:48Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.643601 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.643624 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.643632 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.643645 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.643652 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:48Z","lastTransitionTime":"2025-12-15T05:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:48 crc kubenswrapper[4747]: E1215 05:38:48.651500 4747 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-15T05:38:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4ef03cda-5cb6-4966-bf0f-23d213ae8ebc\\\",\\\"systemUUID\\\":\\\"f6e6c4c5-517c-43b9-abbe-241c399d7f32\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-15T05:38:48Z is after 2025-08-24T17:21:41Z" Dec 15 05:38:48 crc kubenswrapper[4747]: E1215 05:38:48.651609 4747 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.674108 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.674142 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.674155 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.674168 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.674177 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:48Z","lastTransitionTime":"2025-12-15T05:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.776583 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.776635 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.776656 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.776673 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.776691 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:48Z","lastTransitionTime":"2025-12-15T05:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.879241 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.879352 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.879430 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.879501 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.879564 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:48Z","lastTransitionTime":"2025-12-15T05:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.981298 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.981406 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.981472 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.981546 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:48 crc kubenswrapper[4747]: I1215 05:38:48.981596 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:48Z","lastTransitionTime":"2025-12-15T05:38:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.083832 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.083869 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.083877 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.083888 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.083895 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:49Z","lastTransitionTime":"2025-12-15T05:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.186287 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.186318 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.186329 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.186340 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.186348 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:49Z","lastTransitionTime":"2025-12-15T05:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.289107 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.289136 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.289147 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.289160 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.289169 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:49Z","lastTransitionTime":"2025-12-15T05:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.391519 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.392006 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.392078 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.392147 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.392202 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:49Z","lastTransitionTime":"2025-12-15T05:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.493747 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.493797 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.493807 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.493831 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.493843 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:49Z","lastTransitionTime":"2025-12-15T05:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.596513 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.596547 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.596557 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.596572 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.596584 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:49Z","lastTransitionTime":"2025-12-15T05:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.629052 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:49 crc kubenswrapper[4747]: E1215 05:38:49.629195 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.698431 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.698492 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.698502 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.698521 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.698534 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:49Z","lastTransitionTime":"2025-12-15T05:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.800445 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.800475 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.800486 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.800498 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.800506 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:49Z","lastTransitionTime":"2025-12-15T05:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.902091 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.902121 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.902130 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.902142 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:49 crc kubenswrapper[4747]: I1215 05:38:49.902152 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:49Z","lastTransitionTime":"2025-12-15T05:38:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.004082 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.004100 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.004111 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.004122 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.004130 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:50Z","lastTransitionTime":"2025-12-15T05:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.105971 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.106011 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.106023 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.106036 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.106200 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:50Z","lastTransitionTime":"2025-12-15T05:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.208704 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.208779 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.208791 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.208807 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.208818 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:50Z","lastTransitionTime":"2025-12-15T05:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.310552 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.310596 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.310606 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.310627 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.310640 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:50Z","lastTransitionTime":"2025-12-15T05:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.412900 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.412956 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.412987 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.413003 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.413012 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:50Z","lastTransitionTime":"2025-12-15T05:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.514720 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.514774 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.514782 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.514798 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.514808 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:50Z","lastTransitionTime":"2025-12-15T05:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.617288 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.617325 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.617356 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.617370 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.617380 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:50Z","lastTransitionTime":"2025-12-15T05:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.628685 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.628879 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:50 crc kubenswrapper[4747]: E1215 05:38:50.628874 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.629187 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:50 crc kubenswrapper[4747]: E1215 05:38:50.629196 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:50 crc kubenswrapper[4747]: E1215 05:38:50.629523 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.718912 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.718959 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.718971 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.718985 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.718995 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:50Z","lastTransitionTime":"2025-12-15T05:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.821422 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.821461 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.821471 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.821485 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.821498 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:50Z","lastTransitionTime":"2025-12-15T05:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.924390 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.924426 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.924435 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.924447 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:50 crc kubenswrapper[4747]: I1215 05:38:50.924456 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:50Z","lastTransitionTime":"2025-12-15T05:38:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.027774 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.027814 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.027825 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.027845 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.027858 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:51Z","lastTransitionTime":"2025-12-15T05:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.129704 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.129980 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.130000 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.130012 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.130031 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:51Z","lastTransitionTime":"2025-12-15T05:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.231959 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.232002 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.232013 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.232027 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.232037 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:51Z","lastTransitionTime":"2025-12-15T05:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.334114 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.334146 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.334155 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.334167 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.334175 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:51Z","lastTransitionTime":"2025-12-15T05:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.436503 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.436580 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.436592 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.436608 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.436617 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:51Z","lastTransitionTime":"2025-12-15T05:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.538865 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.539018 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.539092 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.539166 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.539221 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:51Z","lastTransitionTime":"2025-12-15T05:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.628754 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:51 crc kubenswrapper[4747]: E1215 05:38:51.629048 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.641597 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.641647 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.641658 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.641676 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.641688 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:51Z","lastTransitionTime":"2025-12-15T05:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.743667 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.743701 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.743711 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.743727 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.743749 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:51Z","lastTransitionTime":"2025-12-15T05:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.757185 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs\") pod \"network-metrics-daemon-4nn8g\" (UID: \"fca0b2d2-cd19-409a-aa6d-df8b295adf62\") " pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:51 crc kubenswrapper[4747]: E1215 05:38:51.757314 4747 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 15 05:38:51 crc kubenswrapper[4747]: E1215 05:38:51.757373 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs podName:fca0b2d2-cd19-409a-aa6d-df8b295adf62 nodeName:}" failed. No retries permitted until 2025-12-15 05:39:55.757357094 +0000 UTC m=+159.453869012 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs") pod "network-metrics-daemon-4nn8g" (UID: "fca0b2d2-cd19-409a-aa6d-df8b295adf62") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.846274 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.846317 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.846328 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.846340 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.846354 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:51Z","lastTransitionTime":"2025-12-15T05:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.948660 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.948689 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.948699 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.948713 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:51 crc kubenswrapper[4747]: I1215 05:38:51.948724 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:51Z","lastTransitionTime":"2025-12-15T05:38:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.050768 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.050811 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.050821 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.050849 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.050862 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:52Z","lastTransitionTime":"2025-12-15T05:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.152983 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.153013 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.153025 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.153055 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.153064 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:52Z","lastTransitionTime":"2025-12-15T05:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.254792 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.254845 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.254857 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.254871 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.254885 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:52Z","lastTransitionTime":"2025-12-15T05:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.357242 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.357300 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.357311 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.357339 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.357353 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:52Z","lastTransitionTime":"2025-12-15T05:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.459483 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.459568 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.459581 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.459607 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.459623 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:52Z","lastTransitionTime":"2025-12-15T05:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.561737 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.561774 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.561782 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.561811 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.561819 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:52Z","lastTransitionTime":"2025-12-15T05:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.628799 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.628830 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.628989 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:52 crc kubenswrapper[4747]: E1215 05:38:52.629137 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:52 crc kubenswrapper[4747]: E1215 05:38:52.629302 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:52 crc kubenswrapper[4747]: E1215 05:38:52.629406 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.663754 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.663796 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.663807 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.663825 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.663838 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:52Z","lastTransitionTime":"2025-12-15T05:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.765697 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.765739 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.765750 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.765764 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.765776 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:52Z","lastTransitionTime":"2025-12-15T05:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.867335 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.867362 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.867370 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.867381 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.867391 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:52Z","lastTransitionTime":"2025-12-15T05:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.969475 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.969510 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.969520 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.969532 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:52 crc kubenswrapper[4747]: I1215 05:38:52.969546 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:52Z","lastTransitionTime":"2025-12-15T05:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.071603 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.071632 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.071641 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.071653 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.071661 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:53Z","lastTransitionTime":"2025-12-15T05:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.173244 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.173281 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.173296 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.173309 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.173319 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:53Z","lastTransitionTime":"2025-12-15T05:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.275633 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.275770 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.275834 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.275920 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.275997 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:53Z","lastTransitionTime":"2025-12-15T05:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.377385 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.377421 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.377430 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.377441 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.377449 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:53Z","lastTransitionTime":"2025-12-15T05:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.479322 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.479681 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.479760 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.479824 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.479885 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:53Z","lastTransitionTime":"2025-12-15T05:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.581594 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.581640 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.581655 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.581668 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.581676 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:53Z","lastTransitionTime":"2025-12-15T05:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.628146 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:53 crc kubenswrapper[4747]: E1215 05:38:53.628271 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.683807 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.683837 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.683847 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.683858 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.683867 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:53Z","lastTransitionTime":"2025-12-15T05:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.789178 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.789208 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.789220 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.789234 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.789245 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:53Z","lastTransitionTime":"2025-12-15T05:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.890708 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.890755 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.890764 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.890779 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.890788 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:53Z","lastTransitionTime":"2025-12-15T05:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.992566 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.992610 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.992618 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.992628 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:53 crc kubenswrapper[4747]: I1215 05:38:53.992638 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:53Z","lastTransitionTime":"2025-12-15T05:38:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.094184 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.094215 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.094246 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.094260 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.094271 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:54Z","lastTransitionTime":"2025-12-15T05:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.196405 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.196439 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.196451 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.196466 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.196474 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:54Z","lastTransitionTime":"2025-12-15T05:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.298206 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.298254 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.298266 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.298284 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.298296 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:54Z","lastTransitionTime":"2025-12-15T05:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.400537 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.400576 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.400589 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.400600 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.400609 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:54Z","lastTransitionTime":"2025-12-15T05:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.502960 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.502994 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.503006 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.503036 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.503044 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:54Z","lastTransitionTime":"2025-12-15T05:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.604845 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.604890 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.604901 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.604920 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.604949 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:54Z","lastTransitionTime":"2025-12-15T05:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.628297 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.628334 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.628363 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:54 crc kubenswrapper[4747]: E1215 05:38:54.628451 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:54 crc kubenswrapper[4747]: E1215 05:38:54.628570 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:54 crc kubenswrapper[4747]: E1215 05:38:54.628705 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.706914 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.706957 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.706984 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.706995 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.707003 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:54Z","lastTransitionTime":"2025-12-15T05:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.809123 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.809166 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.809177 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.809193 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.809205 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:54Z","lastTransitionTime":"2025-12-15T05:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.911180 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.911208 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.911219 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.911231 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:54 crc kubenswrapper[4747]: I1215 05:38:54.911242 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:54Z","lastTransitionTime":"2025-12-15T05:38:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.013326 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.013351 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.013360 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.013372 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.013381 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:55Z","lastTransitionTime":"2025-12-15T05:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.115881 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.115913 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.115948 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.115961 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.115970 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:55Z","lastTransitionTime":"2025-12-15T05:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.217967 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.218002 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.218012 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.218028 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.218037 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:55Z","lastTransitionTime":"2025-12-15T05:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.319428 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.319457 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.319467 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.319490 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.319500 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:55Z","lastTransitionTime":"2025-12-15T05:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.421094 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.421128 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.421136 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.421147 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.421155 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:55Z","lastTransitionTime":"2025-12-15T05:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.522824 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.523061 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.523074 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.523088 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.523099 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:55Z","lastTransitionTime":"2025-12-15T05:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.625777 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.625829 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.625840 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.625851 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.625860 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:55Z","lastTransitionTime":"2025-12-15T05:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.628190 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:55 crc kubenswrapper[4747]: E1215 05:38:55.628340 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.727845 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.727890 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.727900 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.727916 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.727943 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:55Z","lastTransitionTime":"2025-12-15T05:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.829990 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.830023 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.830036 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.830048 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.830059 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:55Z","lastTransitionTime":"2025-12-15T05:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.931968 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.932030 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.932041 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.932052 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:55 crc kubenswrapper[4747]: I1215 05:38:55.932061 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:55Z","lastTransitionTime":"2025-12-15T05:38:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.034449 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.034497 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.034506 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.034525 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.034542 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:56Z","lastTransitionTime":"2025-12-15T05:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.137122 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.137158 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.137169 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.137180 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.137194 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:56Z","lastTransitionTime":"2025-12-15T05:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.239331 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.239364 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.239374 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.239387 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.239396 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:56Z","lastTransitionTime":"2025-12-15T05:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.341216 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.341250 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.341260 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.341275 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.341288 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:56Z","lastTransitionTime":"2025-12-15T05:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.443400 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.443435 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.443445 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.443456 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.443463 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:56Z","lastTransitionTime":"2025-12-15T05:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.545128 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.545166 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.545177 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.545234 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.545252 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:56Z","lastTransitionTime":"2025-12-15T05:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.628677 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:56 crc kubenswrapper[4747]: E1215 05:38:56.628800 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.628853 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.628686 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:56 crc kubenswrapper[4747]: E1215 05:38:56.628984 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:56 crc kubenswrapper[4747]: E1215 05:38:56.629156 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.646588 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.646621 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.646630 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.646644 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.646655 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:56Z","lastTransitionTime":"2025-12-15T05:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.689560 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=17.689544618 podStartE2EDuration="17.689544618s" podCreationTimestamp="2025-12-15 05:38:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:38:56.672137113 +0000 UTC m=+100.368649031" watchObservedRunningTime="2025-12-15 05:38:56.689544618 +0000 UTC m=+100.386056534" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.690053 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=50.690047952 podStartE2EDuration="50.690047952s" podCreationTimestamp="2025-12-15 05:38:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:38:56.689131662 +0000 UTC m=+100.385643589" watchObservedRunningTime="2025-12-15 05:38:56.690047952 +0000 UTC m=+100.386559869" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.730768 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-pc5tw" podStartSLOduration=82.730748599 podStartE2EDuration="1m22.730748599s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:38:56.722281171 +0000 UTC m=+100.418793088" watchObservedRunningTime="2025-12-15 05:38:56.730748599 +0000 UTC m=+100.427260517" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.746856 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podStartSLOduration=82.746840012 
podStartE2EDuration="1m22.746840012s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:38:56.731017914 +0000 UTC m=+100.427529832" watchObservedRunningTime="2025-12-15 05:38:56.746840012 +0000 UTC m=+100.443351929" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.748632 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.748659 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.748668 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.748683 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.748693 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:56Z","lastTransitionTime":"2025-12-15T05:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.763090 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=16.763075334 podStartE2EDuration="16.763075334s" podCreationTimestamp="2025-12-15 05:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:38:56.753490649 +0000 UTC m=+100.450002566" watchObservedRunningTime="2025-12-15 05:38:56.763075334 +0000 UTC m=+100.459587252" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.779837 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-cltgw" podStartSLOduration=82.779814964 podStartE2EDuration="1m22.779814964s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:38:56.779758508 +0000 UTC m=+100.476270425" watchObservedRunningTime="2025-12-15 05:38:56.779814964 +0000 UTC m=+100.476326881" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.787393 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-p2w9d" podStartSLOduration=82.787383233 podStartE2EDuration="1m22.787383233s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:38:56.786895467 +0000 UTC m=+100.483407384" watchObservedRunningTime="2025-12-15 05:38:56.787383233 +0000 UTC m=+100.483895150" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.798263 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-gmfps" podStartSLOduration=82.798233766 
podStartE2EDuration="1m22.798233766s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:38:56.797846249 +0000 UTC m=+100.494358166" watchObservedRunningTime="2025-12-15 05:38:56.798233766 +0000 UTC m=+100.494745682" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.821469 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=82.821455656 podStartE2EDuration="1m22.821455656s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:38:56.821157566 +0000 UTC m=+100.517669483" watchObservedRunningTime="2025-12-15 05:38:56.821455656 +0000 UTC m=+100.517967574" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.849580 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=77.84955852 podStartE2EDuration="1m17.84955852s" podCreationTimestamp="2025-12-15 05:37:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:38:56.849152277 +0000 UTC m=+100.545664194" watchObservedRunningTime="2025-12-15 05:38:56.84955852 +0000 UTC m=+100.546070437" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.849981 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-82d2t" podStartSLOduration=82.849976596 podStartE2EDuration="1m22.849976596s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 
05:38:56.838121868 +0000 UTC m=+100.534633785" watchObservedRunningTime="2025-12-15 05:38:56.849976596 +0000 UTC m=+100.546488512" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.850477 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.850506 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.850516 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.850529 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.850539 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:56Z","lastTransitionTime":"2025-12-15T05:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.952667 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.952753 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.952773 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.952798 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:56 crc kubenswrapper[4747]: I1215 05:38:56.952815 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:56Z","lastTransitionTime":"2025-12-15T05:38:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.054967 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.055022 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.055035 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.055056 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.055072 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:57Z","lastTransitionTime":"2025-12-15T05:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.157232 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.157266 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.157275 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.157289 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.157301 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:57Z","lastTransitionTime":"2025-12-15T05:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.265814 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.265950 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.266034 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.266107 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.266169 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:57Z","lastTransitionTime":"2025-12-15T05:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.368679 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.368731 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.368743 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.368761 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.368774 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:57Z","lastTransitionTime":"2025-12-15T05:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.470746 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.470780 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.470791 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.470803 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.470813 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:57Z","lastTransitionTime":"2025-12-15T05:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.573514 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.573554 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.573565 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.573580 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.573593 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:57Z","lastTransitionTime":"2025-12-15T05:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.629114 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:57 crc kubenswrapper[4747]: E1215 05:38:57.629357 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.675424 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.675461 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.675472 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.675486 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.675503 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:57Z","lastTransitionTime":"2025-12-15T05:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.777787 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.777823 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.777834 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.777848 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.777859 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:57Z","lastTransitionTime":"2025-12-15T05:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.880307 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.880346 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.880356 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.880371 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.880381 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:57Z","lastTransitionTime":"2025-12-15T05:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.982491 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.982537 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.982549 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.982572 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:57 crc kubenswrapper[4747]: I1215 05:38:57.982585 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:57Z","lastTransitionTime":"2025-12-15T05:38:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.084407 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.084447 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.084458 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.084475 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.084489 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:58Z","lastTransitionTime":"2025-12-15T05:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.186099 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.186124 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.186135 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.186165 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.186176 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:58Z","lastTransitionTime":"2025-12-15T05:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.288358 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.288397 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.288407 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.288421 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.288432 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:58Z","lastTransitionTime":"2025-12-15T05:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.390724 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.390762 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.390771 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.390786 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.390799 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:58Z","lastTransitionTime":"2025-12-15T05:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.492379 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.492405 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.492415 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.492425 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.492434 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:58Z","lastTransitionTime":"2025-12-15T05:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.593801 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.593841 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.593852 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.593867 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.593876 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:58Z","lastTransitionTime":"2025-12-15T05:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.628635 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.628658 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.628660 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:38:58 crc kubenswrapper[4747]: E1215 05:38:58.628756 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:38:58 crc kubenswrapper[4747]: E1215 05:38:58.628883 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:38:58 crc kubenswrapper[4747]: E1215 05:38:58.629132 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.695681 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.695740 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.695753 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.695767 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.695779 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:58Z","lastTransitionTime":"2025-12-15T05:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.798155 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.798219 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.798231 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.798251 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.798262 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:58Z","lastTransitionTime":"2025-12-15T05:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.900402 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.900436 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.900447 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.900460 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:58 crc kubenswrapper[4747]: I1215 05:38:58.900467 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:58Z","lastTransitionTime":"2025-12-15T05:38:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.002159 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.002196 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.002210 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.002221 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.002228 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:59Z","lastTransitionTime":"2025-12-15T05:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.043840 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.043891 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.043908 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.043953 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.043969 4747 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-15T05:38:59Z","lastTransitionTime":"2025-12-15T05:38:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.082162 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh9dj"] Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.082708 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh9dj" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.084132 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.084773 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.085214 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.085993 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.122126 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xh9dj\" (UID: \"e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh9dj" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.122162 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xh9dj\" (UID: \"e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh9dj" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.122187 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xh9dj\" (UID: \"e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh9dj" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.122212 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xh9dj\" (UID: \"e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh9dj" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.122335 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xh9dj\" (UID: \"e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh9dj" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.223304 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xh9dj\" (UID: \"e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh9dj" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.223340 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xh9dj\" (UID: \"e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh9dj" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.223362 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xh9dj\" (UID: \"e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh9dj" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.223385 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xh9dj\" (UID: \"e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh9dj" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.223443 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xh9dj\" (UID: \"e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh9dj" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.223451 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xh9dj\" (UID: \"e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh9dj" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.223511 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xh9dj\" (UID: \"e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh9dj" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.224378 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xh9dj\" (UID: \"e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh9dj" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.232485 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xh9dj\" (UID: \"e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh9dj" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.237865 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xh9dj\" (UID: \"e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh9dj" Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.398068 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh9dj" Dec 15 05:38:59 crc kubenswrapper[4747]: W1215 05:38:59.412325 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode913e2f9_a791_4aa3_8bd7_acbc57e1e7a3.slice/crio-52c3baf4af7d284ff6651a10bf3565a90e3909a6499f40916f236536d2c35081 WatchSource:0}: Error finding container 52c3baf4af7d284ff6651a10bf3565a90e3909a6499f40916f236536d2c35081: Status 404 returned error can't find the container with id 52c3baf4af7d284ff6651a10bf3565a90e3909a6499f40916f236536d2c35081 Dec 15 05:38:59 crc kubenswrapper[4747]: I1215 05:38:59.631360 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:38:59 crc kubenswrapper[4747]: E1215 05:38:59.631599 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:39:00 crc kubenswrapper[4747]: I1215 05:39:00.081293 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh9dj" event={"ID":"e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3","Type":"ContainerStarted","Data":"76f07c7aa1fc570e18b4c7d32831b8626b3cf4c6db923f9e2a7b5bc15b48dbe2"} Dec 15 05:39:00 crc kubenswrapper[4747]: I1215 05:39:00.081348 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh9dj" event={"ID":"e913e2f9-a791-4aa3-8bd7-acbc57e1e7a3","Type":"ContainerStarted","Data":"52c3baf4af7d284ff6651a10bf3565a90e3909a6499f40916f236536d2c35081"} Dec 15 05:39:00 crc kubenswrapper[4747]: I1215 05:39:00.628763 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:39:00 crc kubenswrapper[4747]: I1215 05:39:00.628786 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:39:00 crc kubenswrapper[4747]: I1215 05:39:00.628819 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:39:00 crc kubenswrapper[4747]: E1215 05:39:00.629255 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:39:00 crc kubenswrapper[4747]: E1215 05:39:00.629055 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:39:00 crc kubenswrapper[4747]: E1215 05:39:00.629351 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:39:01 crc kubenswrapper[4747]: I1215 05:39:01.628648 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:39:01 crc kubenswrapper[4747]: E1215 05:39:01.629361 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:39:01 crc kubenswrapper[4747]: I1215 05:39:01.629788 4747 scope.go:117] "RemoveContainer" containerID="312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955" Dec 15 05:39:01 crc kubenswrapper[4747]: E1215 05:39:01.630032 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-82lhw_openshift-ovn-kubernetes(2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" Dec 15 05:39:02 crc kubenswrapper[4747]: I1215 05:39:02.628779 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:39:02 crc kubenswrapper[4747]: I1215 05:39:02.628859 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:39:02 crc kubenswrapper[4747]: I1215 05:39:02.628915 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:39:02 crc kubenswrapper[4747]: E1215 05:39:02.629077 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:39:02 crc kubenswrapper[4747]: E1215 05:39:02.629304 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:39:02 crc kubenswrapper[4747]: E1215 05:39:02.629498 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:39:03 crc kubenswrapper[4747]: I1215 05:39:03.628636 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:39:03 crc kubenswrapper[4747]: E1215 05:39:03.629047 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:39:04 crc kubenswrapper[4747]: I1215 05:39:04.628591 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:39:04 crc kubenswrapper[4747]: I1215 05:39:04.628653 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:39:04 crc kubenswrapper[4747]: E1215 05:39:04.628680 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:39:04 crc kubenswrapper[4747]: I1215 05:39:04.628713 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:39:04 crc kubenswrapper[4747]: E1215 05:39:04.628769 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:39:04 crc kubenswrapper[4747]: E1215 05:39:04.628787 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:39:05 crc kubenswrapper[4747]: I1215 05:39:05.628194 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:39:05 crc kubenswrapper[4747]: E1215 05:39:05.628498 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:39:06 crc kubenswrapper[4747]: I1215 05:39:06.629164 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:39:06 crc kubenswrapper[4747]: I1215 05:39:06.629194 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:39:06 crc kubenswrapper[4747]: I1215 05:39:06.629242 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:39:06 crc kubenswrapper[4747]: E1215 05:39:06.630074 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:39:06 crc kubenswrapper[4747]: E1215 05:39:06.630148 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:39:06 crc kubenswrapper[4747]: E1215 05:39:06.630209 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:39:07 crc kubenswrapper[4747]: I1215 05:39:07.628674 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:39:07 crc kubenswrapper[4747]: E1215 05:39:07.628786 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:39:08 crc kubenswrapper[4747]: I1215 05:39:08.100765 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmfps_89350c5d-9a77-499e-81ec-376b012cc219/kube-multus/1.log" Dec 15 05:39:08 crc kubenswrapper[4747]: I1215 05:39:08.101205 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmfps_89350c5d-9a77-499e-81ec-376b012cc219/kube-multus/0.log" Dec 15 05:39:08 crc kubenswrapper[4747]: I1215 05:39:08.101241 4747 generic.go:334] "Generic (PLEG): container finished" podID="89350c5d-9a77-499e-81ec-376b012cc219" containerID="bf7e29913438085594b529ef0499bebcb5d59f0027e5c46d493eb0316c2c553c" exitCode=1 Dec 15 05:39:08 crc kubenswrapper[4747]: I1215 05:39:08.101266 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmfps" event={"ID":"89350c5d-9a77-499e-81ec-376b012cc219","Type":"ContainerDied","Data":"bf7e29913438085594b529ef0499bebcb5d59f0027e5c46d493eb0316c2c553c"} Dec 15 05:39:08 crc kubenswrapper[4747]: I1215 05:39:08.101294 4747 scope.go:117] "RemoveContainer" containerID="31dada17d2bfd9fb0e40e0e67d3d41379a6dabe9f1a7db2b2367a66d120d7e0d" Dec 15 05:39:08 crc kubenswrapper[4747]: I1215 05:39:08.101562 4747 scope.go:117] "RemoveContainer" containerID="bf7e29913438085594b529ef0499bebcb5d59f0027e5c46d493eb0316c2c553c" Dec 15 05:39:08 crc kubenswrapper[4747]: E1215 05:39:08.101699 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-gmfps_openshift-multus(89350c5d-9a77-499e-81ec-376b012cc219)\"" pod="openshift-multus/multus-gmfps" podUID="89350c5d-9a77-499e-81ec-376b012cc219" Dec 15 05:39:08 crc kubenswrapper[4747]: I1215 05:39:08.116176 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh9dj" podStartSLOduration=94.116161604 podStartE2EDuration="1m34.116161604s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:00.096749924 +0000 UTC m=+103.793261840" watchObservedRunningTime="2025-12-15 05:39:08.116161604 +0000 UTC m=+111.812673521" Dec 15 05:39:08 crc kubenswrapper[4747]: I1215 05:39:08.628465 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:39:08 crc kubenswrapper[4747]: I1215 05:39:08.628517 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:39:08 crc kubenswrapper[4747]: I1215 05:39:08.628520 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:39:08 crc kubenswrapper[4747]: E1215 05:39:08.628609 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:39:08 crc kubenswrapper[4747]: E1215 05:39:08.628661 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:39:08 crc kubenswrapper[4747]: E1215 05:39:08.628734 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:39:09 crc kubenswrapper[4747]: I1215 05:39:09.105486 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmfps_89350c5d-9a77-499e-81ec-376b012cc219/kube-multus/1.log" Dec 15 05:39:09 crc kubenswrapper[4747]: I1215 05:39:09.628513 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:39:09 crc kubenswrapper[4747]: E1215 05:39:09.628603 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:39:10 crc kubenswrapper[4747]: I1215 05:39:10.628987 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:39:10 crc kubenswrapper[4747]: E1215 05:39:10.629111 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:39:10 crc kubenswrapper[4747]: I1215 05:39:10.629306 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:39:10 crc kubenswrapper[4747]: E1215 05:39:10.629380 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:39:10 crc kubenswrapper[4747]: I1215 05:39:10.629650 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:39:10 crc kubenswrapper[4747]: E1215 05:39:10.629793 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:39:11 crc kubenswrapper[4747]: I1215 05:39:11.628695 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:39:11 crc kubenswrapper[4747]: E1215 05:39:11.628864 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:39:12 crc kubenswrapper[4747]: I1215 05:39:12.628602 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:39:12 crc kubenswrapper[4747]: E1215 05:39:12.628728 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:39:12 crc kubenswrapper[4747]: I1215 05:39:12.628976 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:39:12 crc kubenswrapper[4747]: I1215 05:39:12.629452 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:39:12 crc kubenswrapper[4747]: E1215 05:39:12.629496 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:39:12 crc kubenswrapper[4747]: E1215 05:39:12.629641 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:39:12 crc kubenswrapper[4747]: I1215 05:39:12.629908 4747 scope.go:117] "RemoveContainer" containerID="312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955" Dec 15 05:39:13 crc kubenswrapper[4747]: I1215 05:39:13.116853 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-82lhw_2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7/ovnkube-controller/3.log" Dec 15 05:39:13 crc kubenswrapper[4747]: I1215 05:39:13.119251 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerStarted","Data":"5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d"} Dec 15 05:39:13 crc kubenswrapper[4747]: I1215 05:39:13.120161 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:39:13 crc kubenswrapper[4747]: I1215 05:39:13.140854 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" podStartSLOduration=99.140838871 podStartE2EDuration="1m39.140838871s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:13.139739897 +0000 UTC 
m=+116.836251814" watchObservedRunningTime="2025-12-15 05:39:13.140838871 +0000 UTC m=+116.837350778" Dec 15 05:39:13 crc kubenswrapper[4747]: I1215 05:39:13.277438 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4nn8g"] Dec 15 05:39:13 crc kubenswrapper[4747]: I1215 05:39:13.277535 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:39:13 crc kubenswrapper[4747]: E1215 05:39:13.277622 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:39:14 crc kubenswrapper[4747]: I1215 05:39:14.628371 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:39:14 crc kubenswrapper[4747]: I1215 05:39:14.628420 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:39:14 crc kubenswrapper[4747]: I1215 05:39:14.628445 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:39:14 crc kubenswrapper[4747]: I1215 05:39:14.628380 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:39:14 crc kubenswrapper[4747]: E1215 05:39:14.628545 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:39:14 crc kubenswrapper[4747]: E1215 05:39:14.628488 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:39:14 crc kubenswrapper[4747]: E1215 05:39:14.628691 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:39:14 crc kubenswrapper[4747]: E1215 05:39:14.628712 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:39:16 crc kubenswrapper[4747]: I1215 05:39:16.629185 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:39:16 crc kubenswrapper[4747]: I1215 05:39:16.629195 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:39:16 crc kubenswrapper[4747]: I1215 05:39:16.629243 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:39:16 crc kubenswrapper[4747]: I1215 05:39:16.629446 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:39:16 crc kubenswrapper[4747]: E1215 05:39:16.629513 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:39:16 crc kubenswrapper[4747]: E1215 05:39:16.629553 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:39:16 crc kubenswrapper[4747]: E1215 05:39:16.629591 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:39:16 crc kubenswrapper[4747]: E1215 05:39:16.629641 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:39:16 crc kubenswrapper[4747]: E1215 05:39:16.661024 4747 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 15 05:39:16 crc kubenswrapper[4747]: E1215 05:39:16.705860 4747 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 15 05:39:18 crc kubenswrapper[4747]: I1215 05:39:18.628712 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:39:18 crc kubenswrapper[4747]: I1215 05:39:18.628748 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:39:18 crc kubenswrapper[4747]: E1215 05:39:18.628813 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:39:18 crc kubenswrapper[4747]: I1215 05:39:18.628718 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:39:18 crc kubenswrapper[4747]: E1215 05:39:18.628888 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:39:18 crc kubenswrapper[4747]: E1215 05:39:18.628997 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:39:18 crc kubenswrapper[4747]: I1215 05:39:18.629017 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:39:18 crc kubenswrapper[4747]: E1215 05:39:18.629088 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:39:20 crc kubenswrapper[4747]: I1215 05:39:20.628288 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:39:20 crc kubenswrapper[4747]: I1215 05:39:20.628343 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:39:20 crc kubenswrapper[4747]: I1215 05:39:20.628345 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:39:20 crc kubenswrapper[4747]: I1215 05:39:20.628407 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:39:20 crc kubenswrapper[4747]: E1215 05:39:20.628406 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:39:20 crc kubenswrapper[4747]: E1215 05:39:20.628490 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:39:20 crc kubenswrapper[4747]: E1215 05:39:20.628564 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:39:20 crc kubenswrapper[4747]: E1215 05:39:20.628660 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:39:21 crc kubenswrapper[4747]: E1215 05:39:21.707774 4747 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 15 05:39:22 crc kubenswrapper[4747]: I1215 05:39:22.628591 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:39:22 crc kubenswrapper[4747]: I1215 05:39:22.628640 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:39:22 crc kubenswrapper[4747]: I1215 05:39:22.628807 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:39:22 crc kubenswrapper[4747]: E1215 05:39:22.628834 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:39:22 crc kubenswrapper[4747]: I1215 05:39:22.628859 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:39:22 crc kubenswrapper[4747]: E1215 05:39:22.629015 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:39:22 crc kubenswrapper[4747]: E1215 05:39:22.629145 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:39:22 crc kubenswrapper[4747]: E1215 05:39:22.629193 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:39:22 crc kubenswrapper[4747]: I1215 05:39:22.629337 4747 scope.go:117] "RemoveContainer" containerID="bf7e29913438085594b529ef0499bebcb5d59f0027e5c46d493eb0316c2c553c" Dec 15 05:39:23 crc kubenswrapper[4747]: I1215 05:39:23.152827 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmfps_89350c5d-9a77-499e-81ec-376b012cc219/kube-multus/1.log" Dec 15 05:39:23 crc kubenswrapper[4747]: I1215 05:39:23.152890 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmfps" event={"ID":"89350c5d-9a77-499e-81ec-376b012cc219","Type":"ContainerStarted","Data":"eb1f5c773253872e7b72eb3d6d8dfb1affde066a8618f8d9fe96d1cb3254c5e1"} Dec 15 05:39:24 crc kubenswrapper[4747]: I1215 05:39:24.628520 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:39:24 crc kubenswrapper[4747]: I1215 05:39:24.628573 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:39:24 crc kubenswrapper[4747]: I1215 05:39:24.628718 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:39:24 crc kubenswrapper[4747]: E1215 05:39:24.628706 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:39:24 crc kubenswrapper[4747]: I1215 05:39:24.628756 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:39:24 crc kubenswrapper[4747]: E1215 05:39:24.628841 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:39:24 crc kubenswrapper[4747]: E1215 05:39:24.628893 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:39:24 crc kubenswrapper[4747]: E1215 05:39:24.628977 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:39:26 crc kubenswrapper[4747]: I1215 05:39:26.628197 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:39:26 crc kubenswrapper[4747]: I1215 05:39:26.628278 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:39:26 crc kubenswrapper[4747]: I1215 05:39:26.628293 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:39:26 crc kubenswrapper[4747]: I1215 05:39:26.628324 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:39:26 crc kubenswrapper[4747]: E1215 05:39:26.629330 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 15 05:39:26 crc kubenswrapper[4747]: E1215 05:39:26.629501 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 15 05:39:26 crc kubenswrapper[4747]: E1215 05:39:26.629580 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4nn8g" podUID="fca0b2d2-cd19-409a-aa6d-df8b295adf62" Dec 15 05:39:26 crc kubenswrapper[4747]: E1215 05:39:26.629667 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 15 05:39:28 crc kubenswrapper[4747]: I1215 05:39:28.629038 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:39:28 crc kubenswrapper[4747]: I1215 05:39:28.629093 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:39:28 crc kubenswrapper[4747]: I1215 05:39:28.629152 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:39:28 crc kubenswrapper[4747]: I1215 05:39:28.629313 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:39:28 crc kubenswrapper[4747]: I1215 05:39:28.630703 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 15 05:39:28 crc kubenswrapper[4747]: I1215 05:39:28.631085 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 15 05:39:28 crc kubenswrapper[4747]: I1215 05:39:28.631102 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 15 05:39:28 crc kubenswrapper[4747]: I1215 05:39:28.631525 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 15 05:39:28 crc kubenswrapper[4747]: I1215 05:39:28.631534 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 15 05:39:28 crc kubenswrapper[4747]: I1215 05:39:28.631852 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.194664 4747 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.218250 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-slnb9"] Dec 15 05:39:29 crc kubenswrapper[4747]: 
I1215 05:39:29.218604 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cjc2b"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.218854 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.219161 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-slnb9" Dec 15 05:39:29 crc kubenswrapper[4747]: W1215 05:39:29.220139 4747 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Dec 15 05:39:29 crc kubenswrapper[4747]: E1215 05:39:29.220171 4747 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 15 05:39:29 crc kubenswrapper[4747]: W1215 05:39:29.220655 4747 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Dec 15 05:39:29 crc kubenswrapper[4747]: E1215 05:39:29.220671 4747 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.221228 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ml4rr"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.221608 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.221962 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-g46rv"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.222234 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-g46rv" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.222756 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rcnrf"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.223024 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rcnrf" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.223388 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-5jmx6"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.223608 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-5jmx6" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.223695 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-76d4n"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.224092 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-76d4n" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.224741 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v472l"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.225001 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.227960 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.227973 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.228126 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.228174 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.228375 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.228524 4747 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"encryption-config-1" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.228569 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.228644 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.228699 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.228839 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.228996 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.229122 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.229403 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.229508 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.229757 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.230095 4747 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.230124 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.230945 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-75dh6"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.231251 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.231383 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.231402 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-75dh6" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.231952 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.232464 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-2sdgk"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.232770 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.238789 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.245222 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.245433 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.246353 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.247040 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.247167 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.249257 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.249610 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.249797 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.249889 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.250002 4747 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.250352 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.250489 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.250597 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.250622 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.250655 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.250717 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.250773 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.250784 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.250799 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.250808 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 
05:39:29.250878 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.250961 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.250992 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.251024 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.251044 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.251088 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.251097 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.251118 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.250967 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.251180 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.251204 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 15 05:39:29 crc 
kubenswrapper[4747]: I1215 05:39:29.251087 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.250996 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.251262 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.251180 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.251332 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.251338 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.251224 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.251406 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.251434 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.268231 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.268507 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 15 05:39:29 crc 
kubenswrapper[4747]: I1215 05:39:29.270163 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.270898 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-q6zkl"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.271173 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.271571 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.272093 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-q6zkl" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.275408 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.276025 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pkbhr"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.276528 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lzg4l"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.277093 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pkbhr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.277200 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.277267 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.284893 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.287122 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288218 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdbrg\" (UniqueName: \"kubernetes.io/projected/fd50242e-74be-4e24-9e3c-121196f60867-kube-api-access-zdbrg\") pod \"controller-manager-879f6c89f-cjc2b\" (UID: \"fd50242e-74be-4e24-9e3c-121196f60867\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288251 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5ec883a6-2265-4c56-97f1-98cd4a3aa084-audit\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288274 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ec883a6-2265-4c56-97f1-98cd4a3aa084-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 
05:39:29.288305 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-g46rv\" (UID: \"efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g46rv" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288326 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ec883a6-2265-4c56-97f1-98cd4a3aa084-audit-dir\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288344 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3d15d6e-b312-4d38-9720-46d211e795f6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rcnrf\" (UID: \"b3d15d6e-b312-4d38-9720-46d211e795f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rcnrf" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288361 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ec883a6-2265-4c56-97f1-98cd4a3aa084-config\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288375 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cx6d\" (UniqueName: \"kubernetes.io/projected/efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39-kube-api-access-8cx6d\") pod \"machine-api-operator-5694c8668f-g46rv\" 
(UID: \"efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g46rv" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288390 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drg45\" (UniqueName: \"kubernetes.io/projected/5ec883a6-2265-4c56-97f1-98cd4a3aa084-kube-api-access-drg45\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288422 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd50242e-74be-4e24-9e3c-121196f60867-serving-cert\") pod \"controller-manager-879f6c89f-cjc2b\" (UID: \"fd50242e-74be-4e24-9e3c-121196f60867\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288439 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd50242e-74be-4e24-9e3c-121196f60867-config\") pod \"controller-manager-879f6c89f-cjc2b\" (UID: \"fd50242e-74be-4e24-9e3c-121196f60867\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288459 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5ec883a6-2265-4c56-97f1-98cd4a3aa084-encryption-config\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288483 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" 
(UniqueName: \"kubernetes.io/host-path/5ec883a6-2265-4c56-97f1-98cd4a3aa084-node-pullsecrets\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288500 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd50242e-74be-4e24-9e3c-121196f60867-client-ca\") pod \"controller-manager-879f6c89f-cjc2b\" (UID: \"fd50242e-74be-4e24-9e3c-121196f60867\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288558 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd50242e-74be-4e24-9e3c-121196f60867-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cjc2b\" (UID: \"fd50242e-74be-4e24-9e3c-121196f60867\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288577 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288595 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39-images\") pod \"machine-api-operator-5694c8668f-g46rv\" (UID: \"efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g46rv" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288619 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v65b8\" (UniqueName: 
\"kubernetes.io/projected/e4b0569a-a2ad-445f-ba73-bec2508e5c0b-kube-api-access-v65b8\") pod \"machine-approver-56656f9798-slnb9\" (UID: \"e4b0569a-a2ad-445f-ba73-bec2508e5c0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-slnb9" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288651 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ec883a6-2265-4c56-97f1-98cd4a3aa084-serving-cert\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288685 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5ec883a6-2265-4c56-97f1-98cd4a3aa084-etcd-client\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288718 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e4b0569a-a2ad-445f-ba73-bec2508e5c0b-machine-approver-tls\") pod \"machine-approver-56656f9798-slnb9\" (UID: \"e4b0569a-a2ad-445f-ba73-bec2508e5c0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-slnb9" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288799 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8qqx\" (UniqueName: \"kubernetes.io/projected/b3d15d6e-b312-4d38-9720-46d211e795f6-kube-api-access-x8qqx\") pod \"openshift-apiserver-operator-796bbdcf4f-rcnrf\" (UID: \"b3d15d6e-b312-4d38-9720-46d211e795f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rcnrf" Dec 
15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288864 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e4b0569a-a2ad-445f-ba73-bec2508e5c0b-auth-proxy-config\") pod \"machine-approver-56656f9798-slnb9\" (UID: \"e4b0569a-a2ad-445f-ba73-bec2508e5c0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-slnb9" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288889 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5ec883a6-2265-4c56-97f1-98cd4a3aa084-etcd-serving-ca\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288907 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5ec883a6-2265-4c56-97f1-98cd4a3aa084-image-import-ca\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288949 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3d15d6e-b312-4d38-9720-46d211e795f6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rcnrf\" (UID: \"b3d15d6e-b312-4d38-9720-46d211e795f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rcnrf" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.288985 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4b0569a-a2ad-445f-ba73-bec2508e5c0b-config\") pod 
\"machine-approver-56656f9798-slnb9\" (UID: \"e4b0569a-a2ad-445f-ba73-bec2508e5c0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-slnb9" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.289006 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39-config\") pod \"machine-api-operator-5694c8668f-g46rv\" (UID: \"efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g46rv" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.289354 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.289864 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.290664 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w46n4"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.291275 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w46n4" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.292387 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gr6p7"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.292777 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-clqdc"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.293104 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-clqdc" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.293265 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gr6p7" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.293366 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-gvbxq"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.294188 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-gvbxq" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.296256 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-snk8n"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.296442 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.296651 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wzmz8"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.297221 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.297261 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.297546 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.297580 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 15 05:39:29 crc 
kubenswrapper[4747]: I1215 05:39:29.297750 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.297830 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.298036 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.298147 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.298389 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-snk8n" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.298794 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.298884 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.299089 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.314517 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.314727 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.314892 4747 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.315000 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.315134 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.315234 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.315317 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.315533 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.315575 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.317372 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpvdj"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.318690 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.319068 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.319243 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.319532 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.319939 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.325381 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.327641 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.328768 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7g65v"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.329222 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tpvkm"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.329559 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpvdj" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.329743 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xkbmm"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.329876 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpvkm" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.329747 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7g65v" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.330170 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzkjc"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.330529 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.330589 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.330609 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xkbmm" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.330594 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzkjc" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.331266 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xlgbx"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.331657 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.331719 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xlgbx" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.332061 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.332354 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mxdtl"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.332955 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czrvw"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.333373 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czrvw" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.333518 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.333548 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jx62d"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.333902 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.334184 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mxdtl" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.335573 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-45bgn"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.335746 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-jx62d" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.335983 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-45bgn" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.339029 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mvljn"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.339641 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.339901 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q24qk"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.341846 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q24qk" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.342625 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mc527"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.342757 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.343220 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mc527" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.345055 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jnf5v"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.347590 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.348010 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnf5v" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.348150 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.349457 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dmhzm"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.350317 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rcnrf"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.350402 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dmhzm" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.353067 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pkbhr"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.354344 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.355690 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8phbj"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.357117 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8phbj" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.358052 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.360122 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7g65v"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.360434 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.362054 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gr6p7"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.363942 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-75dh6"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.365336 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ml4rr"] Dec 
15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.366613 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-76d4n"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.368443 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5jmx6"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.369212 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-snk8n"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.370378 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lzg4l"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.371994 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czrvw"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.373731 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v472l"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.375199 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.376528 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpvdj"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.384765 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.384802 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzkjc"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 
05:39:29.388247 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-g46rv"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.389349 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tpvkm"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.389617 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-g46rv\" (UID: \"efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g46rv" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.389669 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ec883a6-2265-4c56-97f1-98cd4a3aa084-audit-dir\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.389696 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3d15d6e-b312-4d38-9720-46d211e795f6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rcnrf\" (UID: \"b3d15d6e-b312-4d38-9720-46d211e795f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rcnrf" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.389717 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ec883a6-2265-4c56-97f1-98cd4a3aa084-config\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc 
kubenswrapper[4747]: I1215 05:39:29.389734 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cx6d\" (UniqueName: \"kubernetes.io/projected/efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39-kube-api-access-8cx6d\") pod \"machine-api-operator-5694c8668f-g46rv\" (UID: \"efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g46rv" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.389755 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drg45\" (UniqueName: \"kubernetes.io/projected/5ec883a6-2265-4c56-97f1-98cd4a3aa084-kube-api-access-drg45\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.389789 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd50242e-74be-4e24-9e3c-121196f60867-serving-cert\") pod \"controller-manager-879f6c89f-cjc2b\" (UID: \"fd50242e-74be-4e24-9e3c-121196f60867\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.389808 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd50242e-74be-4e24-9e3c-121196f60867-config\") pod \"controller-manager-879f6c89f-cjc2b\" (UID: \"fd50242e-74be-4e24-9e3c-121196f60867\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.389829 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5ec883a6-2265-4c56-97f1-98cd4a3aa084-encryption-config\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " 
pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.389855 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ec883a6-2265-4c56-97f1-98cd4a3aa084-node-pullsecrets\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.389872 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd50242e-74be-4e24-9e3c-121196f60867-client-ca\") pod \"controller-manager-879f6c89f-cjc2b\" (UID: \"fd50242e-74be-4e24-9e3c-121196f60867\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.389905 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd50242e-74be-4e24-9e3c-121196f60867-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cjc2b\" (UID: \"fd50242e-74be-4e24-9e3c-121196f60867\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.389945 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39-images\") pod \"machine-api-operator-5694c8668f-g46rv\" (UID: \"efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g46rv" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.389966 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v65b8\" (UniqueName: \"kubernetes.io/projected/e4b0569a-a2ad-445f-ba73-bec2508e5c0b-kube-api-access-v65b8\") pod 
\"machine-approver-56656f9798-slnb9\" (UID: \"e4b0569a-a2ad-445f-ba73-bec2508e5c0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-slnb9" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.389982 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ec883a6-2265-4c56-97f1-98cd4a3aa084-serving-cert\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.390002 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5ec883a6-2265-4c56-97f1-98cd4a3aa084-etcd-client\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.390029 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e4b0569a-a2ad-445f-ba73-bec2508e5c0b-machine-approver-tls\") pod \"machine-approver-56656f9798-slnb9\" (UID: \"e4b0569a-a2ad-445f-ba73-bec2508e5c0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-slnb9" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.390041 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ec883a6-2265-4c56-97f1-98cd4a3aa084-node-pullsecrets\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.390054 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8qqx\" (UniqueName: 
\"kubernetes.io/projected/b3d15d6e-b312-4d38-9720-46d211e795f6-kube-api-access-x8qqx\") pod \"openshift-apiserver-operator-796bbdcf4f-rcnrf\" (UID: \"b3d15d6e-b312-4d38-9720-46d211e795f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rcnrf" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.390116 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e4b0569a-a2ad-445f-ba73-bec2508e5c0b-auth-proxy-config\") pod \"machine-approver-56656f9798-slnb9\" (UID: \"e4b0569a-a2ad-445f-ba73-bec2508e5c0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-slnb9" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.390138 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5ec883a6-2265-4c56-97f1-98cd4a3aa084-etcd-serving-ca\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.390154 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5ec883a6-2265-4c56-97f1-98cd4a3aa084-image-import-ca\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.390175 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3d15d6e-b312-4d38-9720-46d211e795f6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rcnrf\" (UID: \"b3d15d6e-b312-4d38-9720-46d211e795f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rcnrf" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.390197 4747 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4b0569a-a2ad-445f-ba73-bec2508e5c0b-config\") pod \"machine-approver-56656f9798-slnb9\" (UID: \"e4b0569a-a2ad-445f-ba73-bec2508e5c0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-slnb9" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.390216 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39-config\") pod \"machine-api-operator-5694c8668f-g46rv\" (UID: \"efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g46rv" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.390238 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdbrg\" (UniqueName: \"kubernetes.io/projected/fd50242e-74be-4e24-9e3c-121196f60867-kube-api-access-zdbrg\") pod \"controller-manager-879f6c89f-cjc2b\" (UID: \"fd50242e-74be-4e24-9e3c-121196f60867\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.390259 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5ec883a6-2265-4c56-97f1-98cd4a3aa084-audit\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.390275 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ec883a6-2265-4c56-97f1-98cd4a3aa084-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: 
I1215 05:39:29.390318 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-2sdgk"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.390602 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ec883a6-2265-4c56-97f1-98cd4a3aa084-audit-dir\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.390718 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ec883a6-2265-4c56-97f1-98cd4a3aa084-config\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.391052 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd50242e-74be-4e24-9e3c-121196f60867-client-ca\") pod \"controller-manager-879f6c89f-cjc2b\" (UID: \"fd50242e-74be-4e24-9e3c-121196f60867\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.391258 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mxdtl"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.391729 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39-images\") pod \"machine-api-operator-5694c8668f-g46rv\" (UID: \"efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g46rv" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.392288 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-cjc2b"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.392346 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4b0569a-a2ad-445f-ba73-bec2508e5c0b-config\") pod \"machine-approver-56656f9798-slnb9\" (UID: \"e4b0569a-a2ad-445f-ba73-bec2508e5c0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-slnb9" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.392741 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39-config\") pod \"machine-api-operator-5694c8668f-g46rv\" (UID: \"efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g46rv" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.392803 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd50242e-74be-4e24-9e3c-121196f60867-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cjc2b\" (UID: \"fd50242e-74be-4e24-9e3c-121196f60867\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.392764 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ec883a6-2265-4c56-97f1-98cd4a3aa084-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.393310 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5ec883a6-2265-4c56-97f1-98cd4a3aa084-audit\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " 
pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.393333 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd50242e-74be-4e24-9e3c-121196f60867-config\") pod \"controller-manager-879f6c89f-cjc2b\" (UID: \"fd50242e-74be-4e24-9e3c-121196f60867\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.393390 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e4b0569a-a2ad-445f-ba73-bec2508e5c0b-auth-proxy-config\") pod \"machine-approver-56656f9798-slnb9\" (UID: \"e4b0569a-a2ad-445f-ba73-bec2508e5c0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-slnb9" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.393735 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3d15d6e-b312-4d38-9720-46d211e795f6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rcnrf\" (UID: \"b3d15d6e-b312-4d38-9720-46d211e795f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rcnrf" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.393790 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5ec883a6-2265-4c56-97f1-98cd4a3aa084-etcd-serving-ca\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.393809 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xlgbx"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.394722 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-multus/multus-admission-controller-857f4d67dd-jx62d"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.394743 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5ec883a6-2265-4c56-97f1-98cd4a3aa084-image-import-ca\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.395307 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ec883a6-2265-4c56-97f1-98cd4a3aa084-serving-cert\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.395504 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-clqdc"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.395716 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-g46rv\" (UID: \"efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g46rv" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.396345 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e4b0569a-a2ad-445f-ba73-bec2508e5c0b-machine-approver-tls\") pod \"machine-approver-56656f9798-slnb9\" (UID: \"e4b0569a-a2ad-445f-ba73-bec2508e5c0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-slnb9" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.396456 4747 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-q6zkl"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.396876 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3d15d6e-b312-4d38-9720-46d211e795f6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rcnrf\" (UID: \"b3d15d6e-b312-4d38-9720-46d211e795f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rcnrf" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.397174 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5ec883a6-2265-4c56-97f1-98cd4a3aa084-encryption-config\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.397382 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w46n4"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.398224 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xkbmm"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.399082 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wzmz8"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.399983 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mc527"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.400495 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5ec883a6-2265-4c56-97f1-98cd4a3aa084-etcd-client\") pod \"apiserver-76f77b778f-ml4rr\" (UID: 
\"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.400819 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.401050 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jnf5v"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.402021 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.403147 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8phbj"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.404179 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mvljn"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.405097 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-45bgn"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.406032 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dmhzm"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.406921 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q24qk"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.407845 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-k72td"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.408857 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4txcp"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.409022 4747 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-k72td" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.409667 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4txcp" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.409726 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-k72td"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.410561 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4txcp"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.420466 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.439145 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-rhcmp"] Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.441080 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rhcmp" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.441399 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.460563 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.480482 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.501199 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.520343 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.541045 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.560224 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.581302 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.601260 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.622273 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 15 05:39:29 crc 
kubenswrapper[4747]: I1215 05:39:29.640477 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.660794 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.681256 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.700519 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.722522 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.741418 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.761505 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.781381 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.821389 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.841399 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.861270 4747 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.881386 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.907158 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.920889 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.961699 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 15 05:39:29 crc kubenswrapper[4747]: I1215 05:39:29.981092 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.000791 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.021648 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.040388 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.061224 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.081029 4747 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.100380 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.120907 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.140761 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.160738 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.180405 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.201211 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.220555 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.241154 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.260901 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 15 
05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.280628 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.301203 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.321255 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.339759 4747 request.go:700] Waited for 1.007446926s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcollect-profiles-dockercfg-kzf4t&limit=500&resourceVersion=0 Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.340873 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.361898 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.380648 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 15 05:39:30 crc kubenswrapper[4747]: E1215 05:39:30.391839 4747 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 15 05:39:30 crc kubenswrapper[4747]: E1215 05:39:30.391911 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd50242e-74be-4e24-9e3c-121196f60867-serving-cert podName:fd50242e-74be-4e24-9e3c-121196f60867 nodeName:}" failed. 
No retries permitted until 2025-12-15 05:39:30.891889099 +0000 UTC m=+134.588401016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/fd50242e-74be-4e24-9e3c-121196f60867-serving-cert") pod "controller-manager-879f6c89f-cjc2b" (UID: "fd50242e-74be-4e24-9e3c-121196f60867") : failed to sync secret cache: timed out waiting for the condition Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.400874 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.421449 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.441251 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.461398 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.481272 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.501514 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.520950 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.540795 4747 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.561036 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.580944 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.601071 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.621440 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.640861 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.661230 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.680818 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.701151 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.720675 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.740894 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.760602 4747 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.788042 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.801202 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.821587 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.841423 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.860984 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.881527 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.905914 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd50242e-74be-4e24-9e3c-121196f60867-serving-cert\") pod \"controller-manager-879f6c89f-cjc2b\" (UID: \"fd50242e-74be-4e24-9e3c-121196f60867\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.907588 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.921224 4747 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"metrics-tls" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.941080 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.960707 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 15 05:39:30 crc kubenswrapper[4747]: I1215 05:39:30.980454 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.001419 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.020686 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.040871 4747 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.060943 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.081122 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.114448 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cx6d\" (UniqueName: \"kubernetes.io/projected/efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39-kube-api-access-8cx6d\") pod \"machine-api-operator-5694c8668f-g46rv\" (UID: \"efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g46rv" Dec 
15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.133032 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8qqx\" (UniqueName: \"kubernetes.io/projected/b3d15d6e-b312-4d38-9720-46d211e795f6-kube-api-access-x8qqx\") pod \"openshift-apiserver-operator-796bbdcf4f-rcnrf\" (UID: \"b3d15d6e-b312-4d38-9720-46d211e795f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rcnrf" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.153059 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drg45\" (UniqueName: \"kubernetes.io/projected/5ec883a6-2265-4c56-97f1-98cd4a3aa084-kube-api-access-drg45\") pod \"apiserver-76f77b778f-ml4rr\" (UID: \"5ec883a6-2265-4c56-97f1-98cd4a3aa084\") " pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.192094 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdbrg\" (UniqueName: \"kubernetes.io/projected/fd50242e-74be-4e24-9e3c-121196f60867-kube-api-access-zdbrg\") pod \"controller-manager-879f6c89f-cjc2b\" (UID: \"fd50242e-74be-4e24-9e3c-121196f60867\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.200797 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.220891 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.240600 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.261905 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 15 05:39:31 crc kubenswrapper[4747]: 
I1215 05:39:31.281132 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.301154 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.321170 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.340120 4747 request.go:700] Waited for 1.898823106s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0 Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.341356 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.358221 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.361608 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.374282 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-g46rv" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.381575 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.386500 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rcnrf" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.409461 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/db7a7a97-4354-4b54-afbc-e47fb8751316-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.409494 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f9755e7f-72e0-4b8a-94c2-6702dec42d0b-default-certificate\") pod \"router-default-5444994796-gvbxq\" (UID: \"f9755e7f-72e0-4b8a-94c2-6702dec42d0b\") " pod="openshift-ingress/router-default-5444994796-gvbxq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.409556 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d65z8\" (UniqueName: \"kubernetes.io/projected/db7a7a97-4354-4b54-afbc-e47fb8751316-kube-api-access-d65z8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.409577 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q66vc\" (UniqueName: \"kubernetes.io/projected/b3604b48-6e56-4470-aa4b-c0d1956b42d0-kube-api-access-q66vc\") pod \"openshift-controller-manager-operator-756b6f6bc6-76d4n\" (UID: \"b3604b48-6e56-4470-aa4b-c0d1956b42d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-76d4n" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.409598 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9755e7f-72e0-4b8a-94c2-6702dec42d0b-metrics-certs\") pod \"router-default-5444994796-gvbxq\" (UID: \"f9755e7f-72e0-4b8a-94c2-6702dec42d0b\") " pod="openshift-ingress/router-default-5444994796-gvbxq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.409623 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/db7a7a97-4354-4b54-afbc-e47fb8751316-registry-tls\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.409645 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f9755e7f-72e0-4b8a-94c2-6702dec42d0b-stats-auth\") pod \"router-default-5444994796-gvbxq\" (UID: \"f9755e7f-72e0-4b8a-94c2-6702dec42d0b\") " pod="openshift-ingress/router-default-5444994796-gvbxq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.409742 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3604b48-6e56-4470-aa4b-c0d1956b42d0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-76d4n\" (UID: \"b3604b48-6e56-4470-aa4b-c0d1956b42d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-76d4n" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.409803 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-client-ca\") pod \"route-controller-manager-6576b87f9c-bddlq\" (UID: 
\"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.409828 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/feafc60a-2dff-433e-ad58-01dcc0f23974-trusted-ca\") pod \"console-operator-58897d9998-75dh6\" (UID: \"feafc60a-2dff-433e-ad58-01dcc0f23974\") " pod="openshift-console-operator/console-operator-58897d9998-75dh6" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.409916 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/db7a7a97-4354-4b54-afbc-e47fb8751316-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.409997 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/29341782-010b-4540-99a9-8cb20f667cef-etcd-ca\") pod \"etcd-operator-b45778765-wzmz8\" (UID: \"29341782-010b-4540-99a9-8cb20f667cef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410019 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqwc4\" (UniqueName: \"kubernetes.io/projected/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-kube-api-access-gqwc4\") pod \"route-controller-manager-6576b87f9c-bddlq\" (UID: \"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410076 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410125 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7fm9\" (UniqueName: \"kubernetes.io/projected/fdff8a05-dbcd-4bb1-9b57-ec2c9bf02d0e-kube-api-access-n7fm9\") pod \"package-server-manager-789f6589d5-mpvdj\" (UID: \"fdff8a05-dbcd-4bb1-9b57-ec2c9bf02d0e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpvdj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410179 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fd2dfc21-3dfb-470f-8417-b7f3d1c8d75b-metrics-tls\") pod \"dns-operator-744455d44c-pkbhr\" (UID: \"fd2dfc21-3dfb-470f-8417-b7f3d1c8d75b\") " pod="openshift-dns-operator/dns-operator-744455d44c-pkbhr" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410216 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/57009fe6-55f5-42e6-8389-64796b3784c3-etcd-client\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410239 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29341782-010b-4540-99a9-8cb20f667cef-config\") pod \"etcd-operator-b45778765-wzmz8\" (UID: 
\"29341782-010b-4540-99a9-8cb20f667cef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410477 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-audit-policies\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410499 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410521 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410539 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c3f3cf2-3751-4315-bcf9-f42a5650c32b-serving-cert\") pod \"openshift-config-operator-7777fb866f-gr6p7\" (UID: \"9c3f3cf2-3751-4315-bcf9-f42a5650c32b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gr6p7" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410558 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdff8a05-dbcd-4bb1-9b57-ec2c9bf02d0e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mpvdj\" (UID: \"fdff8a05-dbcd-4bb1-9b57-ec2c9bf02d0e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpvdj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410605 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cdc82fd-2b58-4bbf-8d67-5e66cf80ebb8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-clqdc\" (UID: \"9cdc82fd-2b58-4bbf-8d67-5e66cf80ebb8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-clqdc" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410657 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/db7a7a97-4354-4b54-afbc-e47fb8751316-registry-certificates\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410676 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-serving-cert\") pod \"route-controller-manager-6576b87f9c-bddlq\" (UID: \"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410716 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wttj\" (UniqueName: 
\"kubernetes.io/projected/b7e78311-fd22-49fa-a423-9037fc15aaa5-kube-api-access-8wttj\") pod \"authentication-operator-69f744f599-q6zkl\" (UID: \"b7e78311-fd22-49fa-a423-9037fc15aaa5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q6zkl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410734 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410751 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr2q6\" (UniqueName: \"kubernetes.io/projected/feafc60a-2dff-433e-ad58-01dcc0f23974-kube-api-access-sr2q6\") pod \"console-operator-58897d9998-75dh6\" (UID: \"feafc60a-2dff-433e-ad58-01dcc0f23974\") " pod="openshift-console-operator/console-operator-58897d9998-75dh6" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410767 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7e78311-fd22-49fa-a423-9037fc15aaa5-config\") pod \"authentication-operator-69f744f599-q6zkl\" (UID: \"b7e78311-fd22-49fa-a423-9037fc15aaa5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q6zkl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410791 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/caa99f47-6c6a-4642-b2eb-946507229c80-audit-dir\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410831 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410849 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9c3f3cf2-3751-4315-bcf9-f42a5650c32b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gr6p7\" (UID: \"9c3f3cf2-3751-4315-bcf9-f42a5650c32b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gr6p7" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410873 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410907 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qrfq\" (UniqueName: \"kubernetes.io/projected/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-kube-api-access-9qrfq\") pod \"console-f9d7485db-2sdgk\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.410968 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2bf6\" (UniqueName: \"kubernetes.io/projected/fd2dfc21-3dfb-470f-8417-b7f3d1c8d75b-kube-api-access-r2bf6\") pod \"dns-operator-744455d44c-pkbhr\" (UID: \"fd2dfc21-3dfb-470f-8417-b7f3d1c8d75b\") " pod="openshift-dns-operator/dns-operator-744455d44c-pkbhr" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.411001 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9kll\" (UniqueName: \"kubernetes.io/projected/9c3f3cf2-3751-4315-bcf9-f42a5650c32b-kube-api-access-x9kll\") pod \"openshift-config-operator-7777fb866f-gr6p7\" (UID: \"9c3f3cf2-3751-4315-bcf9-f42a5650c32b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gr6p7" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.411353 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3604b48-6e56-4470-aa4b-c0d1956b42d0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-76d4n\" (UID: \"b3604b48-6e56-4470-aa4b-c0d1956b42d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-76d4n" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.411393 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/57009fe6-55f5-42e6-8389-64796b3784c3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.411426 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-console-serving-cert\") 
pod \"console-f9d7485db-2sdgk\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.411477 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57009fe6-55f5-42e6-8389-64796b3784c3-serving-cert\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.411499 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.411550 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cdc82fd-2b58-4bbf-8d67-5e66cf80ebb8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-clqdc\" (UID: \"9cdc82fd-2b58-4bbf-8d67-5e66cf80ebb8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-clqdc" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.411567 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-trusted-ca-bundle\") pod \"console-f9d7485db-2sdgk\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.411581 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7e78311-fd22-49fa-a423-9037fc15aaa5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-q6zkl\" (UID: \"b7e78311-fd22-49fa-a423-9037fc15aaa5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q6zkl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.411618 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/57009fe6-55f5-42e6-8389-64796b3784c3-audit-policies\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.411647 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr4r2\" (UniqueName: \"kubernetes.io/projected/eb0ca9f3-9ee8-4299-adf4-5220bf190a0c-kube-api-access-wr4r2\") pod \"downloads-7954f5f757-5jmx6\" (UID: \"eb0ca9f3-9ee8-4299-adf4-5220bf190a0c\") " pod="openshift-console/downloads-7954f5f757-5jmx6" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.412267 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-config\") pod \"route-controller-manager-6576b87f9c-bddlq\" (UID: \"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.412318 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feafc60a-2dff-433e-ad58-01dcc0f23974-config\") pod \"console-operator-58897d9998-75dh6\" (UID: 
\"feafc60a-2dff-433e-ad58-01dcc0f23974\") " pod="openshift-console-operator/console-operator-58897d9998-75dh6" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.412394 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.412416 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cdc82fd-2b58-4bbf-8d67-5e66cf80ebb8-config\") pod \"kube-apiserver-operator-766d6c64bb-clqdc\" (UID: \"9cdc82fd-2b58-4bbf-8d67-5e66cf80ebb8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-clqdc" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.412468 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57009fe6-55f5-42e6-8389-64796b3784c3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.412513 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57009fe6-55f5-42e6-8389-64796b3784c3-audit-dir\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.412543 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.412697 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb6bd\" (UniqueName: \"kubernetes.io/projected/38505957-41ec-47b6-86a0-1b7c2a1c853e-kube-api-access-jb6bd\") pod \"migrator-59844c95c7-snk8n\" (UID: \"38505957-41ec-47b6-86a0-1b7c2a1c853e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-snk8n" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.412749 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7e78311-fd22-49fa-a423-9037fc15aaa5-serving-cert\") pod \"authentication-operator-69f744f599-q6zkl\" (UID: \"b7e78311-fd22-49fa-a423-9037fc15aaa5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q6zkl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.412821 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.412857 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7e78311-fd22-49fa-a423-9037fc15aaa5-service-ca-bundle\") pod \"authentication-operator-69f744f599-q6zkl\" (UID: 
\"b7e78311-fd22-49fa-a423-9037fc15aaa5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q6zkl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.413261 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6bvf\" (UniqueName: \"kubernetes.io/projected/54c6f212-1947-47ff-a62a-dcc9b9559882-kube-api-access-j6bvf\") pod \"cluster-samples-operator-665b6dd947-w46n4\" (UID: \"54c6f212-1947-47ff-a62a-dcc9b9559882\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w46n4" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.413295 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgj52\" (UniqueName: \"kubernetes.io/projected/caa99f47-6c6a-4642-b2eb-946507229c80-kube-api-access-qgj52\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.413552 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chqws\" (UniqueName: \"kubernetes.io/projected/f9755e7f-72e0-4b8a-94c2-6702dec42d0b-kube-api-access-chqws\") pod \"router-default-5444994796-gvbxq\" (UID: \"f9755e7f-72e0-4b8a-94c2-6702dec42d0b\") " pod="openshift-ingress/router-default-5444994796-gvbxq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.414402 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/54c6f212-1947-47ff-a62a-dcc9b9559882-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-w46n4\" (UID: \"54c6f212-1947-47ff-a62a-dcc9b9559882\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w46n4" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 
05:39:31.414627 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.414760 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r4qd\" (UniqueName: \"kubernetes.io/projected/57009fe6-55f5-42e6-8389-64796b3784c3-kube-api-access-2r4qd\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.414783 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrxbh\" (UniqueName: \"kubernetes.io/projected/29341782-010b-4540-99a9-8cb20f667cef-kube-api-access-mrxbh\") pod \"etcd-operator-b45778765-wzmz8\" (UID: \"29341782-010b-4540-99a9-8cb20f667cef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.414799 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-console-oauth-config\") pod \"console-f9d7485db-2sdgk\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.414826 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/57009fe6-55f5-42e6-8389-64796b3784c3-encryption-config\") pod 
\"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.415004 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-service-ca\") pod \"console-f9d7485db-2sdgk\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.415047 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9755e7f-72e0-4b8a-94c2-6702dec42d0b-service-ca-bundle\") pod \"router-default-5444994796-gvbxq\" (UID: \"f9755e7f-72e0-4b8a-94c2-6702dec42d0b\") " pod="openshift-ingress/router-default-5444994796-gvbxq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.415072 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.415089 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db7a7a97-4354-4b54-afbc-e47fb8751316-bound-sa-token\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.415104 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/29341782-010b-4540-99a9-8cb20f667cef-etcd-service-ca\") pod \"etcd-operator-b45778765-wzmz8\" (UID: \"29341782-010b-4540-99a9-8cb20f667cef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.415123 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/29341782-010b-4540-99a9-8cb20f667cef-etcd-client\") pod \"etcd-operator-b45778765-wzmz8\" (UID: \"29341782-010b-4540-99a9-8cb20f667cef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.415137 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feafc60a-2dff-433e-ad58-01dcc0f23974-serving-cert\") pod \"console-operator-58897d9998-75dh6\" (UID: \"feafc60a-2dff-433e-ad58-01dcc0f23974\") " pod="openshift-console-operator/console-operator-58897d9998-75dh6" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.415153 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-console-config\") pod \"console-f9d7485db-2sdgk\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.415166 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-oauth-serving-cert\") pod \"console-f9d7485db-2sdgk\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.415185 
4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db7a7a97-4354-4b54-afbc-e47fb8751316-trusted-ca\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.415206 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29341782-010b-4540-99a9-8cb20f667cef-serving-cert\") pod \"etcd-operator-b45778765-wzmz8\" (UID: \"29341782-010b-4540-99a9-8cb20f667cef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" Dec 15 05:39:31 crc kubenswrapper[4747]: E1215 05:39:31.415640 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:31.91561394 +0000 UTC m=+135.612125856 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.442511 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.453789 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v65b8\" (UniqueName: \"kubernetes.io/projected/e4b0569a-a2ad-445f-ba73-bec2508e5c0b-kube-api-access-v65b8\") pod \"machine-approver-56656f9798-slnb9\" (UID: \"e4b0569a-a2ad-445f-ba73-bec2508e5c0b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-slnb9" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.462455 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.468677 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd50242e-74be-4e24-9e3c-121196f60867-serving-cert\") pod \"controller-manager-879f6c89f-cjc2b\" (UID: \"fd50242e-74be-4e24-9e3c-121196f60867\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.508481 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-g46rv"] Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.515616 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:31 crc kubenswrapper[4747]: E1215 05:39:31.516010 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:32.015959661 +0000 UTC m=+135.712471578 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.516087 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57009fe6-55f5-42e6-8389-64796b3784c3-serving-cert\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.516193 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-trusted-ca-bundle\") pod \"console-f9d7485db-2sdgk\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.516225 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/57009fe6-55f5-42e6-8389-64796b3784c3-audit-policies\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.516286 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9560d6f0-3fc0-483c-a3a7-87e022468221-proxy-tls\") pod \"machine-config-operator-74547568cd-mxdtl\" (UID: \"9560d6f0-3fc0-483c-a3a7-87e022468221\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mxdtl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.516326 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr4r2\" (UniqueName: \"kubernetes.io/projected/eb0ca9f3-9ee8-4299-adf4-5220bf190a0c-kube-api-access-wr4r2\") pod \"downloads-7954f5f757-5jmx6\" (UID: \"eb0ca9f3-9ee8-4299-adf4-5220bf190a0c\") " pod="openshift-console/downloads-7954f5f757-5jmx6" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.516363 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eba25f55-9f7e-43cc-a111-a5e4184c037e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jx62d\" (UID: \"eba25f55-9f7e-43cc-a111-a5e4184c037e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jx62d" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.516393 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.516420 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cdc82fd-2b58-4bbf-8d67-5e66cf80ebb8-config\") pod \"kube-apiserver-operator-766d6c64bb-clqdc\" (UID: \"9cdc82fd-2b58-4bbf-8d67-5e66cf80ebb8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-clqdc" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.516446 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp8ph\" (UniqueName: \"kubernetes.io/projected/fe39e570-d08d-473e-a9d8-4aedffae0f04-kube-api-access-mp8ph\") pod \"csi-hostpathplugin-8phbj\" (UID: \"fe39e570-d08d-473e-a9d8-4aedffae0f04\") " pod="hostpath-provisioner/csi-hostpathplugin-8phbj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.516474 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57009fe6-55f5-42e6-8389-64796b3784c3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.516555 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69fcp\" (UniqueName: \"kubernetes.io/projected/3aed04d0-4166-4ed3-bf2b-39e9598d0160-kube-api-access-69fcp\") pod \"marketplace-operator-79b997595-mvljn\" (UID: \"3aed04d0-4166-4ed3-bf2b-39e9598d0160\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.516673 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/43660579-30f6-416b-b60a-db19d0f244f8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dmhzm\" (UID: \"43660579-30f6-416b-b60a-db19d0f244f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dmhzm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.516712 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fe39e570-d08d-473e-a9d8-4aedffae0f04-plugins-dir\") pod \"csi-hostpathplugin-8phbj\" (UID: \"fe39e570-d08d-473e-a9d8-4aedffae0f04\") " pod="hostpath-provisioner/csi-hostpathplugin-8phbj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.516756 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5skg\" (UniqueName: \"kubernetes.io/projected/ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62-kube-api-access-m5skg\") pod \"collect-profiles-29429610-bgnz8\" (UID: \"ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.516894 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.517023 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c8ebe95-b54a-4271-b6ad-a0d081bc93a7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-czrvw\" (UID: \"9c8ebe95-b54a-4271-b6ad-a0d081bc93a7\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czrvw" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.517055 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb6bd\" (UniqueName: \"kubernetes.io/projected/38505957-41ec-47b6-86a0-1b7c2a1c853e-kube-api-access-jb6bd\") pod \"migrator-59844c95c7-snk8n\" (UID: \"38505957-41ec-47b6-86a0-1b7c2a1c853e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-snk8n" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.517062 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/57009fe6-55f5-42e6-8389-64796b3784c3-audit-policies\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.517103 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7e78311-fd22-49fa-a423-9037fc15aaa5-serving-cert\") pod \"authentication-operator-69f744f599-q6zkl\" (UID: \"b7e78311-fd22-49fa-a423-9037fc15aaa5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q6zkl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.517132 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe39e570-d08d-473e-a9d8-4aedffae0f04-registration-dir\") pod \"csi-hostpathplugin-8phbj\" (UID: \"fe39e570-d08d-473e-a9d8-4aedffae0f04\") " pod="hostpath-provisioner/csi-hostpathplugin-8phbj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.517153 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/4dbb5b21-8479-4f73-b71d-2f2ab2a22b82-node-bootstrap-token\") pod \"machine-config-server-rhcmp\" (UID: \"4dbb5b21-8479-4f73-b71d-2f2ab2a22b82\") " pod="openshift-machine-config-operator/machine-config-server-rhcmp" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.517169 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cdc82fd-2b58-4bbf-8d67-5e66cf80ebb8-config\") pod \"kube-apiserver-operator-766d6c64bb-clqdc\" (UID: \"9cdc82fd-2b58-4bbf-8d67-5e66cf80ebb8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-clqdc" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.517202 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.517230 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57009fe6-55f5-42e6-8389-64796b3784c3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.517267 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7e78311-fd22-49fa-a423-9037fc15aaa5-service-ca-bundle\") pod \"authentication-operator-69f744f599-q6zkl\" (UID: \"b7e78311-fd22-49fa-a423-9037fc15aaa5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q6zkl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.517335 
4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgj52\" (UniqueName: \"kubernetes.io/projected/caa99f47-6c6a-4642-b2eb-946507229c80-kube-api-access-qgj52\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.517337 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.517377 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chqws\" (UniqueName: \"kubernetes.io/projected/f9755e7f-72e0-4b8a-94c2-6702dec42d0b-kube-api-access-chqws\") pod \"router-default-5444994796-gvbxq\" (UID: \"f9755e7f-72e0-4b8a-94c2-6702dec42d0b\") " pod="openshift-ingress/router-default-5444994796-gvbxq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.517483 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.517530 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3286e37f-50f9-4120-af33-d9e09be31e37-config-volume\") pod \"dns-default-k72td\" (UID: \"3286e37f-50f9-4120-af33-d9e09be31e37\") " 
pod="openshift-dns/dns-default-k72td" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.517645 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bbed563-3f20-42a1-949b-d5490500299b-cert\") pod \"ingress-canary-4txcp\" (UID: \"4bbed563-3f20-42a1-949b-d5490500299b\") " pod="openshift-ingress-canary/ingress-canary-4txcp" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518117 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp9rm\" (UniqueName: \"kubernetes.io/projected/1335c7dc-dfe5-40d0-81b2-bc095c5a80c0-kube-api-access-xp9rm\") pod \"catalog-operator-68c6474976-mc527\" (UID: \"1335c7dc-dfe5-40d0-81b2-bc095c5a80c0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mc527" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518154 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/57009fe6-55f5-42e6-8389-64796b3784c3-encryption-config\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518179 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrxbh\" (UniqueName: \"kubernetes.io/projected/29341782-010b-4540-99a9-8cb20f667cef-kube-api-access-mrxbh\") pod \"etcd-operator-b45778765-wzmz8\" (UID: \"29341782-010b-4540-99a9-8cb20f667cef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518204 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-console-oauth-config\") pod 
\"console-f9d7485db-2sdgk\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518226 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07bf5f61-69f9-4b0b-9f0d-f7c8e1b8379b-serving-cert\") pod \"service-ca-operator-777779d784-xlgbx\" (UID: \"07bf5f61-69f9-4b0b-9f0d-f7c8e1b8379b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xlgbx" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518260 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3286e37f-50f9-4120-af33-d9e09be31e37-metrics-tls\") pod \"dns-default-k72td\" (UID: \"3286e37f-50f9-4120-af33-d9e09be31e37\") " pod="openshift-dns/dns-default-k72td" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518283 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c8ebe95-b54a-4271-b6ad-a0d081bc93a7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-czrvw\" (UID: \"9c8ebe95-b54a-4271-b6ad-a0d081bc93a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czrvw" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518305 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db7a7a97-4354-4b54-afbc-e47fb8751316-bound-sa-token\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518325 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/29341782-010b-4540-99a9-8cb20f667cef-etcd-service-ca\") pod \"etcd-operator-b45778765-wzmz8\" (UID: \"29341782-010b-4540-99a9-8cb20f667cef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518554 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/29341782-010b-4540-99a9-8cb20f667cef-etcd-client\") pod \"etcd-operator-b45778765-wzmz8\" (UID: \"29341782-010b-4540-99a9-8cb20f667cef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518596 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43660579-30f6-416b-b60a-db19d0f244f8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dmhzm\" (UID: \"43660579-30f6-416b-b60a-db19d0f244f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dmhzm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518626 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1335c7dc-dfe5-40d0-81b2-bc095c5a80c0-profile-collector-cert\") pod \"catalog-operator-68c6474976-mc527\" (UID: \"1335c7dc-dfe5-40d0-81b2-bc095c5a80c0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mc527" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518666 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d5tp\" (UniqueName: \"kubernetes.io/projected/07bf5f61-69f9-4b0b-9f0d-f7c8e1b8379b-kube-api-access-7d5tp\") pod \"service-ca-operator-777779d784-xlgbx\" (UID: \"07bf5f61-69f9-4b0b-9f0d-f7c8e1b8379b\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-xlgbx" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518696 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db7a7a97-4354-4b54-afbc-e47fb8751316-trusted-ca\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518718 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q66vc\" (UniqueName: \"kubernetes.io/projected/b3604b48-6e56-4470-aa4b-c0d1956b42d0-kube-api-access-q66vc\") pod \"openshift-controller-manager-operator-756b6f6bc6-76d4n\" (UID: \"b3604b48-6e56-4470-aa4b-c0d1956b42d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-76d4n" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518742 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/feafc60a-2dff-433e-ad58-01dcc0f23974-trusted-ca\") pod \"console-operator-58897d9998-75dh6\" (UID: \"feafc60a-2dff-433e-ad58-01dcc0f23974\") " pod="openshift-console-operator/console-operator-58897d9998-75dh6" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518765 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3604b48-6e56-4470-aa4b-c0d1956b42d0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-76d4n\" (UID: \"b3604b48-6e56-4470-aa4b-c0d1956b42d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-76d4n" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518788 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-client-ca\") pod \"route-controller-manager-6576b87f9c-bddlq\" (UID: \"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518805 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/db7a7a97-4354-4b54-afbc-e47fb8751316-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518827 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqwc4\" (UniqueName: \"kubernetes.io/projected/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-kube-api-access-gqwc4\") pod \"route-controller-manager-6576b87f9c-bddlq\" (UID: \"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518852 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/dea1e983-d109-4be5-b1e2-8de9d982dfb7-signing-key\") pod \"service-ca-9c57cc56f-45bgn\" (UID: \"dea1e983-d109-4be5-b1e2-8de9d982dfb7\") " pod="openshift-service-ca/service-ca-9c57cc56f-45bgn" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518872 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fd2dfc21-3dfb-470f-8417-b7f3d1c8d75b-metrics-tls\") pod \"dns-operator-744455d44c-pkbhr\" (UID: \"fd2dfc21-3dfb-470f-8417-b7f3d1c8d75b\") " pod="openshift-dns-operator/dns-operator-744455d44c-pkbhr" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 
05:39:31.518900 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29341782-010b-4540-99a9-8cb20f667cef-config\") pod \"etcd-operator-b45778765-wzmz8\" (UID: \"29341782-010b-4540-99a9-8cb20f667cef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518918 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-audit-policies\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518955 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.518982 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f139e81b-c534-4004-81b1-202a6b0e45f2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xkbmm\" (UID: \"f139e81b-c534-4004-81b1-202a6b0e45f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xkbmm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519014 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx8ks\" (UniqueName: \"kubernetes.io/projected/126d37e8-f81f-445a-bf48-49d228d42748-kube-api-access-tx8ks\") pod 
\"machine-config-controller-84d6567774-tpvkm\" (UID: \"126d37e8-f81f-445a-bf48-49d228d42748\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpvkm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519038 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c3f3cf2-3751-4315-bcf9-f42a5650c32b-serving-cert\") pod \"openshift-config-operator-7777fb866f-gr6p7\" (UID: \"9c3f3cf2-3751-4315-bcf9-f42a5650c32b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gr6p7" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519061 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdff8a05-dbcd-4bb1-9b57-ec2c9bf02d0e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mpvdj\" (UID: \"fdff8a05-dbcd-4bb1-9b57-ec2c9bf02d0e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpvdj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519084 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fe39e570-d08d-473e-a9d8-4aedffae0f04-mountpoint-dir\") pod \"csi-hostpathplugin-8phbj\" (UID: \"fe39e570-d08d-473e-a9d8-4aedffae0f04\") " pod="hostpath-provisioner/csi-hostpathplugin-8phbj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519107 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wttj\" (UniqueName: \"kubernetes.io/projected/b7e78311-fd22-49fa-a423-9037fc15aaa5-kube-api-access-8wttj\") pod \"authentication-operator-69f744f599-q6zkl\" (UID: \"b7e78311-fd22-49fa-a423-9037fc15aaa5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q6zkl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 
05:39:31.519126 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e22aff6-5dc0-454e-b980-d39cfcd08ba6-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jnf5v\" (UID: \"7e22aff6-5dc0-454e-b980-d39cfcd08ba6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnf5v" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519148 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c8ebe95-b54a-4271-b6ad-a0d081bc93a7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-czrvw\" (UID: \"9c8ebe95-b54a-4271-b6ad-a0d081bc93a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czrvw" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519172 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62-secret-volume\") pod \"collect-profiles-29429610-bgnz8\" (UID: \"ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519196 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9c3f3cf2-3751-4315-bcf9-f42a5650c32b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gr6p7\" (UID: \"9c3f3cf2-3751-4315-bcf9-f42a5650c32b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gr6p7" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519218 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7e78311-fd22-49fa-a423-9037fc15aaa5-config\") pod 
\"authentication-operator-69f744f599-q6zkl\" (UID: \"b7e78311-fd22-49fa-a423-9037fc15aaa5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q6zkl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519239 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519259 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62-config-volume\") pod \"collect-profiles-29429610-bgnz8\" (UID: \"ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519286 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qrfq\" (UniqueName: \"kubernetes.io/projected/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-kube-api-access-9qrfq\") pod \"console-f9d7485db-2sdgk\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519308 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rx46\" (UniqueName: \"kubernetes.io/projected/4dbb5b21-8479-4f73-b71d-2f2ab2a22b82-kube-api-access-5rx46\") pod \"machine-config-server-rhcmp\" (UID: \"4dbb5b21-8479-4f73-b71d-2f2ab2a22b82\") " pod="openshift-machine-config-operator/machine-config-server-rhcmp" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519329 4747 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-console-serving-cert\") pod \"console-f9d7485db-2sdgk\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519351 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxg28\" (UniqueName: \"kubernetes.io/projected/9560d6f0-3fc0-483c-a3a7-87e022468221-kube-api-access-sxg28\") pod \"machine-config-operator-74547568cd-mxdtl\" (UID: \"9560d6f0-3fc0-483c-a3a7-87e022468221\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mxdtl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519372 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z5c2\" (UniqueName: \"kubernetes.io/projected/dea1e983-d109-4be5-b1e2-8de9d982dfb7-kube-api-access-8z5c2\") pod \"service-ca-9c57cc56f-45bgn\" (UID: \"dea1e983-d109-4be5-b1e2-8de9d982dfb7\") " pod="openshift-service-ca/service-ca-9c57cc56f-45bgn" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519379 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/29341782-010b-4540-99a9-8cb20f667cef-etcd-service-ca\") pod \"etcd-operator-b45778765-wzmz8\" (UID: \"29341782-010b-4540-99a9-8cb20f667cef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519395 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3aed04d0-4166-4ed3-bf2b-39e9598d0160-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mvljn\" (UID: 
\"3aed04d0-4166-4ed3-bf2b-39e9598d0160\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519420 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ac6e673-f966-4177-84a1-440b3989f4ab-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7g65v\" (UID: \"4ac6e673-f966-4177-84a1-440b3989f4ab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7g65v" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519455 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3604b48-6e56-4470-aa4b-c0d1956b42d0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-76d4n\" (UID: \"b3604b48-6e56-4470-aa4b-c0d1956b42d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-76d4n" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519474 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cdc82fd-2b58-4bbf-8d67-5e66cf80ebb8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-clqdc\" (UID: \"9cdc82fd-2b58-4bbf-8d67-5e66cf80ebb8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-clqdc" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519491 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7e78311-fd22-49fa-a423-9037fc15aaa5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-q6zkl\" (UID: \"b7e78311-fd22-49fa-a423-9037fc15aaa5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q6zkl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519514 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67k9m\" (UniqueName: \"kubernetes.io/projected/3286e37f-50f9-4120-af33-d9e09be31e37-kube-api-access-67k9m\") pod \"dns-default-k72td\" (UID: \"3286e37f-50f9-4120-af33-d9e09be31e37\") " pod="openshift-dns/dns-default-k72td" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519534 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519550 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfpst\" (UniqueName: \"kubernetes.io/projected/eba25f55-9f7e-43cc-a111-a5e4184c037e-kube-api-access-sfpst\") pod \"multus-admission-controller-857f4d67dd-jx62d\" (UID: \"eba25f55-9f7e-43cc-a111-a5e4184c037e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jx62d" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519578 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c0f0f4c-b174-4082-a173-60b46a8d83fc-apiservice-cert\") pod \"packageserver-d55dfcdfc-t44jd\" (UID: \"5c0f0f4c-b174-4082-a173-60b46a8d83fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519593 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac6e673-f966-4177-84a1-440b3989f4ab-config\") pod \"kube-controller-manager-operator-78b949d7b-7g65v\" (UID: 
\"4ac6e673-f966-4177-84a1-440b3989f4ab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7g65v" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519608 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07bf5f61-69f9-4b0b-9f0d-f7c8e1b8379b-config\") pod \"service-ca-operator-777779d784-xlgbx\" (UID: \"07bf5f61-69f9-4b0b-9f0d-f7c8e1b8379b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xlgbx" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519639 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-config\") pod \"route-controller-manager-6576b87f9c-bddlq\" (UID: \"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519658 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feafc60a-2dff-433e-ad58-01dcc0f23974-config\") pod \"console-operator-58897d9998-75dh6\" (UID: \"feafc60a-2dff-433e-ad58-01dcc0f23974\") " pod="openshift-console-operator/console-operator-58897d9998-75dh6" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519677 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8nm5\" (UniqueName: \"kubernetes.io/projected/4bbed563-3f20-42a1-949b-d5490500299b-kube-api-access-s8nm5\") pod \"ingress-canary-4txcp\" (UID: \"4bbed563-3f20-42a1-949b-d5490500299b\") " pod="openshift-ingress-canary/ingress-canary-4txcp" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519699 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-b2hvr\" (UniqueName: \"kubernetes.io/projected/7e22aff6-5dc0-454e-b980-d39cfcd08ba6-kube-api-access-b2hvr\") pod \"ingress-operator-5b745b69d9-jnf5v\" (UID: \"7e22aff6-5dc0-454e-b980-d39cfcd08ba6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnf5v" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519717 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3aed04d0-4166-4ed3-bf2b-39e9598d0160-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mvljn\" (UID: \"3aed04d0-4166-4ed3-bf2b-39e9598d0160\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519739 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/94faa019-bb1f-48da-a0e8-395e8a7d13b4-srv-cert\") pod \"olm-operator-6b444d44fb-q24qk\" (UID: \"94faa019-bb1f-48da-a0e8-395e8a7d13b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q24qk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519762 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57009fe6-55f5-42e6-8389-64796b3784c3-audit-dir\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519782 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb4nz\" (UniqueName: \"kubernetes.io/projected/f139e81b-c534-4004-81b1-202a6b0e45f2-kube-api-access-lb4nz\") pod \"control-plane-machine-set-operator-78cbb6b69f-xkbmm\" (UID: \"f139e81b-c534-4004-81b1-202a6b0e45f2\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xkbmm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519802 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a6b61cd-3536-4763-8fe5-0a49f5a360b5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzkjc\" (UID: \"7a6b61cd-3536-4763-8fe5-0a49f5a360b5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzkjc" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519823 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/54c6f212-1947-47ff-a62a-dcc9b9559882-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-w46n4\" (UID: \"54c6f212-1947-47ff-a62a-dcc9b9559882\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w46n4" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519848 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6bvf\" (UniqueName: \"kubernetes.io/projected/54c6f212-1947-47ff-a62a-dcc9b9559882-kube-api-access-j6bvf\") pod \"cluster-samples-operator-665b6dd947-w46n4\" (UID: \"54c6f212-1947-47ff-a62a-dcc9b9559882\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w46n4" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519865 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e22aff6-5dc0-454e-b980-d39cfcd08ba6-trusted-ca\") pod \"ingress-operator-5b745b69d9-jnf5v\" (UID: \"7e22aff6-5dc0-454e-b980-d39cfcd08ba6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnf5v" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519882 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1335c7dc-dfe5-40d0-81b2-bc095c5a80c0-srv-cert\") pod \"catalog-operator-68c6474976-mc527\" (UID: \"1335c7dc-dfe5-40d0-81b2-bc095c5a80c0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mc527" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519900 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r4qd\" (UniqueName: \"kubernetes.io/projected/57009fe6-55f5-42e6-8389-64796b3784c3-kube-api-access-2r4qd\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519943 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe39e570-d08d-473e-a9d8-4aedffae0f04-socket-dir\") pod \"csi-hostpathplugin-8phbj\" (UID: \"fe39e570-d08d-473e-a9d8-4aedffae0f04\") " pod="hostpath-provisioner/csi-hostpathplugin-8phbj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519969 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9755e7f-72e0-4b8a-94c2-6702dec42d0b-service-ca-bundle\") pod \"router-default-5444994796-gvbxq\" (UID: \"f9755e7f-72e0-4b8a-94c2-6702dec42d0b\") " pod="openshift-ingress/router-default-5444994796-gvbxq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.519986 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-service-ca\") pod \"console-f9d7485db-2sdgk\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 
05:39:31.520002 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fe39e570-d08d-473e-a9d8-4aedffae0f04-csi-data-dir\") pod \"csi-hostpathplugin-8phbj\" (UID: \"fe39e570-d08d-473e-a9d8-4aedffae0f04\") " pod="hostpath-provisioner/csi-hostpathplugin-8phbj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.520018 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5c0f0f4c-b174-4082-a173-60b46a8d83fc-tmpfs\") pod \"packageserver-d55dfcdfc-t44jd\" (UID: \"5c0f0f4c-b174-4082-a173-60b46a8d83fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.520042 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.520061 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feafc60a-2dff-433e-ad58-01dcc0f23974-serving-cert\") pod \"console-operator-58897d9998-75dh6\" (UID: \"feafc60a-2dff-433e-ad58-01dcc0f23974\") " pod="openshift-console-operator/console-operator-58897d9998-75dh6" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.520084 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a6b61cd-3536-4763-8fe5-0a49f5a360b5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzkjc\" (UID: 
\"7a6b61cd-3536-4763-8fe5-0a49f5a360b5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzkjc" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.520102 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-oauth-serving-cert\") pod \"console-f9d7485db-2sdgk\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.520119 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e22aff6-5dc0-454e-b980-d39cfcd08ba6-metrics-tls\") pod \"ingress-operator-5b745b69d9-jnf5v\" (UID: \"7e22aff6-5dc0-454e-b980-d39cfcd08ba6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnf5v" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.520138 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29341782-010b-4540-99a9-8cb20f667cef-serving-cert\") pod \"etcd-operator-b45778765-wzmz8\" (UID: \"29341782-010b-4540-99a9-8cb20f667cef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.520156 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-console-config\") pod \"console-f9d7485db-2sdgk\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.520157 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/9c3f3cf2-3751-4315-bcf9-f42a5650c32b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gr6p7\" (UID: \"9c3f3cf2-3751-4315-bcf9-f42a5650c32b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gr6p7" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.520173 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9560d6f0-3fc0-483c-a3a7-87e022468221-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mxdtl\" (UID: \"9560d6f0-3fc0-483c-a3a7-87e022468221\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mxdtl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.520193 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v6cc\" (UniqueName: \"kubernetes.io/projected/94faa019-bb1f-48da-a0e8-395e8a7d13b4-kube-api-access-6v6cc\") pod \"olm-operator-6b444d44fb-q24qk\" (UID: \"94faa019-bb1f-48da-a0e8-395e8a7d13b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q24qk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.520214 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/db7a7a97-4354-4b54-afbc-e47fb8751316-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.520247 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f9755e7f-72e0-4b8a-94c2-6702dec42d0b-default-certificate\") pod \"router-default-5444994796-gvbxq\" (UID: \"f9755e7f-72e0-4b8a-94c2-6702dec42d0b\") " 
pod="openshift-ingress/router-default-5444994796-gvbxq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.520390 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57009fe6-55f5-42e6-8389-64796b3784c3-serving-cert\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.520716 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/db7a7a97-4354-4b54-afbc-e47fb8751316-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.521018 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7e78311-fd22-49fa-a423-9037fc15aaa5-config\") pod \"authentication-operator-69f744f599-q6zkl\" (UID: \"b7e78311-fd22-49fa-a423-9037fc15aaa5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q6zkl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.521233 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9755e7f-72e0-4b8a-94c2-6702dec42d0b-service-ca-bundle\") pod \"router-default-5444994796-gvbxq\" (UID: \"f9755e7f-72e0-4b8a-94c2-6702dec42d0b\") " pod="openshift-ingress/router-default-5444994796-gvbxq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.521798 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-service-ca\") pod \"console-f9d7485db-2sdgk\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " 
pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.521835 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d65z8\" (UniqueName: \"kubernetes.io/projected/db7a7a97-4354-4b54-afbc-e47fb8751316-kube-api-access-d65z8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.521861 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/db7a7a97-4354-4b54-afbc-e47fb8751316-registry-tls\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.521858 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-config\") pod \"route-controller-manager-6576b87f9c-bddlq\" (UID: \"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.521885 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f9755e7f-72e0-4b8a-94c2-6702dec42d0b-stats-auth\") pod \"router-default-5444994796-gvbxq\" (UID: \"f9755e7f-72e0-4b8a-94c2-6702dec42d0b\") " pod="openshift-ingress/router-default-5444994796-gvbxq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.521906 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9755e7f-72e0-4b8a-94c2-6702dec42d0b-metrics-certs\") pod \"router-default-5444994796-gvbxq\" (UID: 
\"f9755e7f-72e0-4b8a-94c2-6702dec42d0b\") " pod="openshift-ingress/router-default-5444994796-gvbxq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.521943 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9560d6f0-3fc0-483c-a3a7-87e022468221-images\") pod \"machine-config-operator-74547568cd-mxdtl\" (UID: \"9560d6f0-3fc0-483c-a3a7-87e022468221\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mxdtl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.521963 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/126d37e8-f81f-445a-bf48-49d228d42748-proxy-tls\") pod \"machine-config-controller-84d6567774-tpvkm\" (UID: \"126d37e8-f81f-445a-bf48-49d228d42748\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpvkm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.521985 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/43660579-30f6-416b-b60a-db19d0f244f8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dmhzm\" (UID: \"43660579-30f6-416b-b60a-db19d0f244f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dmhzm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.522006 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/29341782-010b-4540-99a9-8cb20f667cef-etcd-ca\") pod \"etcd-operator-b45778765-wzmz8\" (UID: \"29341782-010b-4540-99a9-8cb20f667cef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.522027 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4dbb5b21-8479-4f73-b71d-2f2ab2a22b82-certs\") pod \"machine-config-server-rhcmp\" (UID: \"4dbb5b21-8479-4f73-b71d-2f2ab2a22b82\") " pod="openshift-machine-config-operator/machine-config-server-rhcmp" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.522163 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.522188 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7fm9\" (UniqueName: \"kubernetes.io/projected/fdff8a05-dbcd-4bb1-9b57-ec2c9bf02d0e-kube-api-access-n7fm9\") pod \"package-server-manager-789f6589d5-mpvdj\" (UID: \"fdff8a05-dbcd-4bb1-9b57-ec2c9bf02d0e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpvdj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.522209 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/57009fe6-55f5-42e6-8389-64796b3784c3-etcd-client\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.522229 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ac6e673-f966-4177-84a1-440b3989f4ab-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7g65v\" (UID: \"4ac6e673-f966-4177-84a1-440b3989f4ab\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7g65v" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.522252 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.522271 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cdc82fd-2b58-4bbf-8d67-5e66cf80ebb8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-clqdc\" (UID: \"9cdc82fd-2b58-4bbf-8d67-5e66cf80ebb8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-clqdc" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.522302 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/db7a7a97-4354-4b54-afbc-e47fb8751316-registry-certificates\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.522319 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kthpk\" (UniqueName: \"kubernetes.io/projected/7a6b61cd-3536-4763-8fe5-0a49f5a360b5-kube-api-access-kthpk\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzkjc\" (UID: \"7a6b61cd-3536-4763-8fe5-0a49f5a360b5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzkjc" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.522340 
4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-serving-cert\") pod \"route-controller-manager-6576b87f9c-bddlq\" (UID: \"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.522359 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.522381 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr2q6\" (UniqueName: \"kubernetes.io/projected/feafc60a-2dff-433e-ad58-01dcc0f23974-kube-api-access-sr2q6\") pod \"console-operator-58897d9998-75dh6\" (UID: \"feafc60a-2dff-433e-ad58-01dcc0f23974\") " pod="openshift-console-operator/console-operator-58897d9998-75dh6" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.522399 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/dea1e983-d109-4be5-b1e2-8de9d982dfb7-signing-cabundle\") pod \"service-ca-9c57cc56f-45bgn\" (UID: \"dea1e983-d109-4be5-b1e2-8de9d982dfb7\") " pod="openshift-service-ca/service-ca-9c57cc56f-45bgn" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.522418 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9mlh\" (UniqueName: \"kubernetes.io/projected/43660579-30f6-416b-b60a-db19d0f244f8-kube-api-access-j9mlh\") pod 
\"cluster-image-registry-operator-dc59b4c8b-dmhzm\" (UID: \"43660579-30f6-416b-b60a-db19d0f244f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dmhzm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.522440 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/caa99f47-6c6a-4642-b2eb-946507229c80-audit-dir\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.522460 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.523077 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7e78311-fd22-49fa-a423-9037fc15aaa5-service-ca-bundle\") pod \"authentication-operator-69f744f599-q6zkl\" (UID: \"b7e78311-fd22-49fa-a423-9037fc15aaa5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q6zkl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.523348 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-trusted-ca-bundle\") pod \"console-f9d7485db-2sdgk\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.523398 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.523990 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-oauth-serving-cert\") pod \"console-f9d7485db-2sdgk\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.524198 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db7a7a97-4354-4b54-afbc-e47fb8751316-trusted-ca\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.524312 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/57009fe6-55f5-42e6-8389-64796b3784c3-encryption-config\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.524716 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7e78311-fd22-49fa-a423-9037fc15aaa5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-q6zkl\" (UID: \"b7e78311-fd22-49fa-a423-9037fc15aaa5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q6zkl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.525080 4747 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29341782-010b-4540-99a9-8cb20f667cef-config\") pod \"etcd-operator-b45778765-wzmz8\" (UID: \"29341782-010b-4540-99a9-8cb20f667cef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.525194 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.525453 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feafc60a-2dff-433e-ad58-01dcc0f23974-serving-cert\") pod \"console-operator-58897d9998-75dh6\" (UID: \"feafc60a-2dff-433e-ad58-01dcc0f23974\") " pod="openshift-console-operator/console-operator-58897d9998-75dh6" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.525488 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7e78311-fd22-49fa-a423-9037fc15aaa5-serving-cert\") pod \"authentication-operator-69f744f599-q6zkl\" (UID: \"b7e78311-fd22-49fa-a423-9037fc15aaa5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q6zkl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.525670 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-audit-policies\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc 
kubenswrapper[4747]: I1215 05:39:31.525941 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-console-serving-cert\") pod \"console-f9d7485db-2sdgk\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.526621 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/caa99f47-6c6a-4642-b2eb-946507229c80-audit-dir\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.526788 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c0f0f4c-b174-4082-a173-60b46a8d83fc-webhook-cert\") pod \"packageserver-d55dfcdfc-t44jd\" (UID: \"5c0f0f4c-b174-4082-a173-60b46a8d83fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.526831 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmj4f\" (UniqueName: \"kubernetes.io/projected/5c0f0f4c-b174-4082-a173-60b46a8d83fc-kube-api-access-pmj4f\") pod \"packageserver-d55dfcdfc-t44jd\" (UID: \"5c0f0f4c-b174-4082-a173-60b46a8d83fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.526992 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/126d37e8-f81f-445a-bf48-49d228d42748-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tpvkm\" (UID: 
\"126d37e8-f81f-445a-bf48-49d228d42748\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpvkm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.527310 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-client-ca\") pod \"route-controller-manager-6576b87f9c-bddlq\" (UID: \"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.528120 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.528167 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9kll\" (UniqueName: \"kubernetes.io/projected/9c3f3cf2-3751-4315-bcf9-f42a5650c32b-kube-api-access-x9kll\") pod \"openshift-config-operator-7777fb866f-gr6p7\" (UID: \"9c3f3cf2-3751-4315-bcf9-f42a5650c32b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gr6p7" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.528215 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2bf6\" (UniqueName: \"kubernetes.io/projected/fd2dfc21-3dfb-470f-8417-b7f3d1c8d75b-kube-api-access-r2bf6\") pod \"dns-operator-744455d44c-pkbhr\" (UID: \"fd2dfc21-3dfb-470f-8417-b7f3d1c8d75b\") " pod="openshift-dns-operator/dns-operator-744455d44c-pkbhr" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.528221 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-console-config\") pod \"console-f9d7485db-2sdgk\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.528251 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/feafc60a-2dff-433e-ad58-01dcc0f23974-trusted-ca\") pod \"console-operator-58897d9998-75dh6\" (UID: \"feafc60a-2dff-433e-ad58-01dcc0f23974\") " pod="openshift-console-operator/console-operator-58897d9998-75dh6" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.528317 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/57009fe6-55f5-42e6-8389-64796b3784c3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.528347 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/94faa019-bb1f-48da-a0e8-395e8a7d13b4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-q24qk\" (UID: \"94faa019-bb1f-48da-a0e8-395e8a7d13b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q24qk" Dec 15 05:39:31 crc kubenswrapper[4747]: E1215 05:39:31.528575 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:32.028560222 +0000 UTC m=+135.725072139 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.528829 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feafc60a-2dff-433e-ad58-01dcc0f23974-config\") pod \"console-operator-58897d9998-75dh6\" (UID: \"feafc60a-2dff-433e-ad58-01dcc0f23974\") " pod="openshift-console-operator/console-operator-58897d9998-75dh6" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.528959 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57009fe6-55f5-42e6-8389-64796b3784c3-audit-dir\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.529476 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/29341782-010b-4540-99a9-8cb20f667cef-etcd-ca\") pod \"etcd-operator-b45778765-wzmz8\" (UID: \"29341782-010b-4540-99a9-8cb20f667cef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.529610 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3604b48-6e56-4470-aa4b-c0d1956b42d0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-76d4n\" (UID: \"b3604b48-6e56-4470-aa4b-c0d1956b42d0\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-76d4n" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.529715 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/57009fe6-55f5-42e6-8389-64796b3784c3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.529948 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f9755e7f-72e0-4b8a-94c2-6702dec42d0b-default-certificate\") pod \"router-default-5444994796-gvbxq\" (UID: \"f9755e7f-72e0-4b8a-94c2-6702dec42d0b\") " pod="openshift-ingress/router-default-5444994796-gvbxq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.530155 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.530569 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.530733 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.531236 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/db7a7a97-4354-4b54-afbc-e47fb8751316-registry-certificates\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.531861 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-console-oauth-config\") pod \"console-f9d7485db-2sdgk\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.531896 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/54c6f212-1947-47ff-a62a-dcc9b9559882-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-w46n4\" (UID: \"54c6f212-1947-47ff-a62a-dcc9b9559882\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w46n4" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.531937 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/29341782-010b-4540-99a9-8cb20f667cef-etcd-client\") pod \"etcd-operator-b45778765-wzmz8\" (UID: \"29341782-010b-4540-99a9-8cb20f667cef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.532198 4747 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cdc82fd-2b58-4bbf-8d67-5e66cf80ebb8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-clqdc\" (UID: \"9cdc82fd-2b58-4bbf-8d67-5e66cf80ebb8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-clqdc" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.533089 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/db7a7a97-4354-4b54-afbc-e47fb8751316-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.533239 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fd2dfc21-3dfb-470f-8417-b7f3d1c8d75b-metrics-tls\") pod \"dns-operator-744455d44c-pkbhr\" (UID: \"fd2dfc21-3dfb-470f-8417-b7f3d1c8d75b\") " pod="openshift-dns-operator/dns-operator-744455d44c-pkbhr" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.533604 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3604b48-6e56-4470-aa4b-c0d1956b42d0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-76d4n\" (UID: \"b3604b48-6e56-4470-aa4b-c0d1956b42d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-76d4n" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.533661 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/57009fe6-55f5-42e6-8389-64796b3784c3-etcd-client\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 
05:39:31.533677 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.533998 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29341782-010b-4540-99a9-8cb20f667cef-serving-cert\") pod \"etcd-operator-b45778765-wzmz8\" (UID: \"29341782-010b-4540-99a9-8cb20f667cef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.533997 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.534026 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-serving-cert\") pod \"route-controller-manager-6576b87f9c-bddlq\" (UID: \"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.534448 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-v472l\" (UID: 
\"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.534720 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/db7a7a97-4354-4b54-afbc-e47fb8751316-registry-tls\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.535162 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9755e7f-72e0-4b8a-94c2-6702dec42d0b-metrics-certs\") pod \"router-default-5444994796-gvbxq\" (UID: \"f9755e7f-72e0-4b8a-94c2-6702dec42d0b\") " pod="openshift-ingress/router-default-5444994796-gvbxq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.535736 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f9755e7f-72e0-4b8a-94c2-6702dec42d0b-stats-auth\") pod \"router-default-5444994796-gvbxq\" (UID: \"f9755e7f-72e0-4b8a-94c2-6702dec42d0b\") " pod="openshift-ingress/router-default-5444994796-gvbxq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.535892 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c3f3cf2-3751-4315-bcf9-f42a5650c32b-serving-cert\") pod \"openshift-config-operator-7777fb866f-gr6p7\" (UID: \"9c3f3cf2-3751-4315-bcf9-f42a5650c32b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gr6p7" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.536232 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.536803 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdff8a05-dbcd-4bb1-9b57-ec2c9bf02d0e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mpvdj\" (UID: \"fdff8a05-dbcd-4bb1-9b57-ec2c9bf02d0e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpvdj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.551554 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr4r2\" (UniqueName: \"kubernetes.io/projected/eb0ca9f3-9ee8-4299-adf4-5220bf190a0c-kube-api-access-wr4r2\") pod \"downloads-7954f5f757-5jmx6\" (UID: \"eb0ca9f3-9ee8-4299-adf4-5220bf190a0c\") " pod="openshift-console/downloads-7954f5f757-5jmx6" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.572092 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb6bd\" (UniqueName: \"kubernetes.io/projected/38505957-41ec-47b6-86a0-1b7c2a1c853e-kube-api-access-jb6bd\") pod \"migrator-59844c95c7-snk8n\" (UID: \"38505957-41ec-47b6-86a0-1b7c2a1c853e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-snk8n" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.593417 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chqws\" (UniqueName: \"kubernetes.io/projected/f9755e7f-72e0-4b8a-94c2-6702dec42d0b-kube-api-access-chqws\") pod \"router-default-5444994796-gvbxq\" (UID: \"f9755e7f-72e0-4b8a-94c2-6702dec42d0b\") " pod="openshift-ingress/router-default-5444994796-gvbxq" Dec 15 05:39:31 crc kubenswrapper[4747]: 
I1215 05:39:31.612765 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgj52\" (UniqueName: \"kubernetes.io/projected/caa99f47-6c6a-4642-b2eb-946507229c80-kube-api-access-qgj52\") pod \"oauth-openshift-558db77b4-v472l\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.629376 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.629558 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9560d6f0-3fc0-483c-a3a7-87e022468221-proxy-tls\") pod \"machine-config-operator-74547568cd-mxdtl\" (UID: \"9560d6f0-3fc0-483c-a3a7-87e022468221\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mxdtl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.629582 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eba25f55-9f7e-43cc-a111-a5e4184c037e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jx62d\" (UID: \"eba25f55-9f7e-43cc-a111-a5e4184c037e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jx62d" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.629601 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp8ph\" (UniqueName: \"kubernetes.io/projected/fe39e570-d08d-473e-a9d8-4aedffae0f04-kube-api-access-mp8ph\") pod \"csi-hostpathplugin-8phbj\" (UID: \"fe39e570-d08d-473e-a9d8-4aedffae0f04\") " 
pod="hostpath-provisioner/csi-hostpathplugin-8phbj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.629620 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69fcp\" (UniqueName: \"kubernetes.io/projected/3aed04d0-4166-4ed3-bf2b-39e9598d0160-kube-api-access-69fcp\") pod \"marketplace-operator-79b997595-mvljn\" (UID: \"3aed04d0-4166-4ed3-bf2b-39e9598d0160\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.629647 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43660579-30f6-416b-b60a-db19d0f244f8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dmhzm\" (UID: \"43660579-30f6-416b-b60a-db19d0f244f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dmhzm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.629679 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fe39e570-d08d-473e-a9d8-4aedffae0f04-plugins-dir\") pod \"csi-hostpathplugin-8phbj\" (UID: \"fe39e570-d08d-473e-a9d8-4aedffae0f04\") " pod="hostpath-provisioner/csi-hostpathplugin-8phbj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.629695 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c8ebe95-b54a-4271-b6ad-a0d081bc93a7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-czrvw\" (UID: \"9c8ebe95-b54a-4271-b6ad-a0d081bc93a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czrvw" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.629714 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5skg\" (UniqueName: 
\"kubernetes.io/projected/ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62-kube-api-access-m5skg\") pod \"collect-profiles-29429610-bgnz8\" (UID: \"ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.629730 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe39e570-d08d-473e-a9d8-4aedffae0f04-registration-dir\") pod \"csi-hostpathplugin-8phbj\" (UID: \"fe39e570-d08d-473e-a9d8-4aedffae0f04\") " pod="hostpath-provisioner/csi-hostpathplugin-8phbj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.629746 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4dbb5b21-8479-4f73-b71d-2f2ab2a22b82-node-bootstrap-token\") pod \"machine-config-server-rhcmp\" (UID: \"4dbb5b21-8479-4f73-b71d-2f2ab2a22b82\") " pod="openshift-machine-config-operator/machine-config-server-rhcmp" Dec 15 05:39:31 crc kubenswrapper[4747]: E1215 05:39:31.629778 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:32.129756582 +0000 UTC m=+135.826268490 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.629807 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3286e37f-50f9-4120-af33-d9e09be31e37-config-volume\") pod \"dns-default-k72td\" (UID: \"3286e37f-50f9-4120-af33-d9e09be31e37\") " pod="openshift-dns/dns-default-k72td" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.629831 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bbed563-3f20-42a1-949b-d5490500299b-cert\") pod \"ingress-canary-4txcp\" (UID: \"4bbed563-3f20-42a1-949b-d5490500299b\") " pod="openshift-ingress-canary/ingress-canary-4txcp" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.629852 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp9rm\" (UniqueName: \"kubernetes.io/projected/1335c7dc-dfe5-40d0-81b2-bc095c5a80c0-kube-api-access-xp9rm\") pod \"catalog-operator-68c6474976-mc527\" (UID: \"1335c7dc-dfe5-40d0-81b2-bc095c5a80c0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mc527" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.629891 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07bf5f61-69f9-4b0b-9f0d-f7c8e1b8379b-serving-cert\") pod \"service-ca-operator-777779d784-xlgbx\" (UID: \"07bf5f61-69f9-4b0b-9f0d-f7c8e1b8379b\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-xlgbx" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.629911 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3286e37f-50f9-4120-af33-d9e09be31e37-metrics-tls\") pod \"dns-default-k72td\" (UID: \"3286e37f-50f9-4120-af33-d9e09be31e37\") " pod="openshift-dns/dns-default-k72td" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.629948 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c8ebe95-b54a-4271-b6ad-a0d081bc93a7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-czrvw\" (UID: \"9c8ebe95-b54a-4271-b6ad-a0d081bc93a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czrvw" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.629971 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43660579-30f6-416b-b60a-db19d0f244f8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dmhzm\" (UID: \"43660579-30f6-416b-b60a-db19d0f244f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dmhzm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.629986 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1335c7dc-dfe5-40d0-81b2-bc095c5a80c0-profile-collector-cert\") pod \"catalog-operator-68c6474976-mc527\" (UID: \"1335c7dc-dfe5-40d0-81b2-bc095c5a80c0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mc527" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630002 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d5tp\" (UniqueName: 
\"kubernetes.io/projected/07bf5f61-69f9-4b0b-9f0d-f7c8e1b8379b-kube-api-access-7d5tp\") pod \"service-ca-operator-777779d784-xlgbx\" (UID: \"07bf5f61-69f9-4b0b-9f0d-f7c8e1b8379b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xlgbx" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630036 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/dea1e983-d109-4be5-b1e2-8de9d982dfb7-signing-key\") pod \"service-ca-9c57cc56f-45bgn\" (UID: \"dea1e983-d109-4be5-b1e2-8de9d982dfb7\") " pod="openshift-service-ca/service-ca-9c57cc56f-45bgn" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630057 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f139e81b-c534-4004-81b1-202a6b0e45f2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xkbmm\" (UID: \"f139e81b-c534-4004-81b1-202a6b0e45f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xkbmm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630077 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx8ks\" (UniqueName: \"kubernetes.io/projected/126d37e8-f81f-445a-bf48-49d228d42748-kube-api-access-tx8ks\") pod \"machine-config-controller-84d6567774-tpvkm\" (UID: \"126d37e8-f81f-445a-bf48-49d228d42748\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpvkm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630098 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fe39e570-d08d-473e-a9d8-4aedffae0f04-mountpoint-dir\") pod \"csi-hostpathplugin-8phbj\" (UID: \"fe39e570-d08d-473e-a9d8-4aedffae0f04\") " pod="hostpath-provisioner/csi-hostpathplugin-8phbj" Dec 15 05:39:31 
crc kubenswrapper[4747]: I1215 05:39:31.630124 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e22aff6-5dc0-454e-b980-d39cfcd08ba6-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jnf5v\" (UID: \"7e22aff6-5dc0-454e-b980-d39cfcd08ba6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnf5v" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630142 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c8ebe95-b54a-4271-b6ad-a0d081bc93a7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-czrvw\" (UID: \"9c8ebe95-b54a-4271-b6ad-a0d081bc93a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czrvw" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630159 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62-secret-volume\") pod \"collect-profiles-29429610-bgnz8\" (UID: \"ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630175 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62-config-volume\") pod \"collect-profiles-29429610-bgnz8\" (UID: \"ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630198 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rx46\" (UniqueName: \"kubernetes.io/projected/4dbb5b21-8479-4f73-b71d-2f2ab2a22b82-kube-api-access-5rx46\") pod \"machine-config-server-rhcmp\" (UID: 
\"4dbb5b21-8479-4f73-b71d-2f2ab2a22b82\") " pod="openshift-machine-config-operator/machine-config-server-rhcmp" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630225 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxg28\" (UniqueName: \"kubernetes.io/projected/9560d6f0-3fc0-483c-a3a7-87e022468221-kube-api-access-sxg28\") pod \"machine-config-operator-74547568cd-mxdtl\" (UID: \"9560d6f0-3fc0-483c-a3a7-87e022468221\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mxdtl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630241 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z5c2\" (UniqueName: \"kubernetes.io/projected/dea1e983-d109-4be5-b1e2-8de9d982dfb7-kube-api-access-8z5c2\") pod \"service-ca-9c57cc56f-45bgn\" (UID: \"dea1e983-d109-4be5-b1e2-8de9d982dfb7\") " pod="openshift-service-ca/service-ca-9c57cc56f-45bgn" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630257 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3aed04d0-4166-4ed3-bf2b-39e9598d0160-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mvljn\" (UID: \"3aed04d0-4166-4ed3-bf2b-39e9598d0160\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630277 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ac6e673-f966-4177-84a1-440b3989f4ab-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7g65v\" (UID: \"4ac6e673-f966-4177-84a1-440b3989f4ab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7g65v" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630308 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-67k9m\" (UniqueName: \"kubernetes.io/projected/3286e37f-50f9-4120-af33-d9e09be31e37-kube-api-access-67k9m\") pod \"dns-default-k72td\" (UID: \"3286e37f-50f9-4120-af33-d9e09be31e37\") " pod="openshift-dns/dns-default-k72td" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630328 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfpst\" (UniqueName: \"kubernetes.io/projected/eba25f55-9f7e-43cc-a111-a5e4184c037e-kube-api-access-sfpst\") pod \"multus-admission-controller-857f4d67dd-jx62d\" (UID: \"eba25f55-9f7e-43cc-a111-a5e4184c037e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jx62d" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630347 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c0f0f4c-b174-4082-a173-60b46a8d83fc-apiservice-cert\") pod \"packageserver-d55dfcdfc-t44jd\" (UID: \"5c0f0f4c-b174-4082-a173-60b46a8d83fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630363 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac6e673-f966-4177-84a1-440b3989f4ab-config\") pod \"kube-controller-manager-operator-78b949d7b-7g65v\" (UID: \"4ac6e673-f966-4177-84a1-440b3989f4ab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7g65v" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630378 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07bf5f61-69f9-4b0b-9f0d-f7c8e1b8379b-config\") pod \"service-ca-operator-777779d784-xlgbx\" (UID: \"07bf5f61-69f9-4b0b-9f0d-f7c8e1b8379b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xlgbx" Dec 15 05:39:31 crc 
kubenswrapper[4747]: I1215 05:39:31.630398 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2hvr\" (UniqueName: \"kubernetes.io/projected/7e22aff6-5dc0-454e-b980-d39cfcd08ba6-kube-api-access-b2hvr\") pod \"ingress-operator-5b745b69d9-jnf5v\" (UID: \"7e22aff6-5dc0-454e-b980-d39cfcd08ba6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnf5v" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630414 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8nm5\" (UniqueName: \"kubernetes.io/projected/4bbed563-3f20-42a1-949b-d5490500299b-kube-api-access-s8nm5\") pod \"ingress-canary-4txcp\" (UID: \"4bbed563-3f20-42a1-949b-d5490500299b\") " pod="openshift-ingress-canary/ingress-canary-4txcp" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630432 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3aed04d0-4166-4ed3-bf2b-39e9598d0160-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mvljn\" (UID: \"3aed04d0-4166-4ed3-bf2b-39e9598d0160\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630452 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/94faa019-bb1f-48da-a0e8-395e8a7d13b4-srv-cert\") pod \"olm-operator-6b444d44fb-q24qk\" (UID: \"94faa019-bb1f-48da-a0e8-395e8a7d13b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q24qk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630477 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb4nz\" (UniqueName: \"kubernetes.io/projected/f139e81b-c534-4004-81b1-202a6b0e45f2-kube-api-access-lb4nz\") pod \"control-plane-machine-set-operator-78cbb6b69f-xkbmm\" (UID: 
\"f139e81b-c534-4004-81b1-202a6b0e45f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xkbmm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630493 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a6b61cd-3536-4763-8fe5-0a49f5a360b5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzkjc\" (UID: \"7a6b61cd-3536-4763-8fe5-0a49f5a360b5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzkjc" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630507 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e22aff6-5dc0-454e-b980-d39cfcd08ba6-trusted-ca\") pod \"ingress-operator-5b745b69d9-jnf5v\" (UID: \"7e22aff6-5dc0-454e-b980-d39cfcd08ba6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnf5v" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630532 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe39e570-d08d-473e-a9d8-4aedffae0f04-socket-dir\") pod \"csi-hostpathplugin-8phbj\" (UID: \"fe39e570-d08d-473e-a9d8-4aedffae0f04\") " pod="hostpath-provisioner/csi-hostpathplugin-8phbj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630546 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1335c7dc-dfe5-40d0-81b2-bc095c5a80c0-srv-cert\") pod \"catalog-operator-68c6474976-mc527\" (UID: \"1335c7dc-dfe5-40d0-81b2-bc095c5a80c0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mc527" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630562 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/5c0f0f4c-b174-4082-a173-60b46a8d83fc-tmpfs\") pod \"packageserver-d55dfcdfc-t44jd\" (UID: \"5c0f0f4c-b174-4082-a173-60b46a8d83fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630576 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fe39e570-d08d-473e-a9d8-4aedffae0f04-csi-data-dir\") pod \"csi-hostpathplugin-8phbj\" (UID: \"fe39e570-d08d-473e-a9d8-4aedffae0f04\") " pod="hostpath-provisioner/csi-hostpathplugin-8phbj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630591 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a6b61cd-3536-4763-8fe5-0a49f5a360b5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzkjc\" (UID: \"7a6b61cd-3536-4763-8fe5-0a49f5a360b5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzkjc" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630620 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43660579-30f6-416b-b60a-db19d0f244f8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dmhzm\" (UID: \"43660579-30f6-416b-b60a-db19d0f244f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dmhzm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630640 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc 
kubenswrapper[4747]: I1215 05:39:31.630699 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e22aff6-5dc0-454e-b980-d39cfcd08ba6-metrics-tls\") pod \"ingress-operator-5b745b69d9-jnf5v\" (UID: \"7e22aff6-5dc0-454e-b980-d39cfcd08ba6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnf5v" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630719 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9560d6f0-3fc0-483c-a3a7-87e022468221-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mxdtl\" (UID: \"9560d6f0-3fc0-483c-a3a7-87e022468221\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mxdtl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630739 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v6cc\" (UniqueName: \"kubernetes.io/projected/94faa019-bb1f-48da-a0e8-395e8a7d13b4-kube-api-access-6v6cc\") pod \"olm-operator-6b444d44fb-q24qk\" (UID: \"94faa019-bb1f-48da-a0e8-395e8a7d13b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q24qk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630765 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9560d6f0-3fc0-483c-a3a7-87e022468221-images\") pod \"machine-config-operator-74547568cd-mxdtl\" (UID: \"9560d6f0-3fc0-483c-a3a7-87e022468221\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mxdtl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630782 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/126d37e8-f81f-445a-bf48-49d228d42748-proxy-tls\") pod \"machine-config-controller-84d6567774-tpvkm\" (UID: 
\"126d37e8-f81f-445a-bf48-49d228d42748\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpvkm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630799 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/43660579-30f6-416b-b60a-db19d0f244f8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dmhzm\" (UID: \"43660579-30f6-416b-b60a-db19d0f244f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dmhzm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630818 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4dbb5b21-8479-4f73-b71d-2f2ab2a22b82-certs\") pod \"machine-config-server-rhcmp\" (UID: \"4dbb5b21-8479-4f73-b71d-2f2ab2a22b82\") " pod="openshift-machine-config-operator/machine-config-server-rhcmp" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630842 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ac6e673-f966-4177-84a1-440b3989f4ab-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7g65v\" (UID: \"4ac6e673-f966-4177-84a1-440b3989f4ab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7g65v" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630876 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kthpk\" (UniqueName: \"kubernetes.io/projected/7a6b61cd-3536-4763-8fe5-0a49f5a360b5-kube-api-access-kthpk\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzkjc\" (UID: \"7a6b61cd-3536-4763-8fe5-0a49f5a360b5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzkjc" Dec 15 05:39:31 crc kubenswrapper[4747]: E1215 
05:39:31.630887 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:32.130879083 +0000 UTC m=+135.827391000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630916 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/dea1e983-d109-4be5-b1e2-8de9d982dfb7-signing-cabundle\") pod \"service-ca-9c57cc56f-45bgn\" (UID: \"dea1e983-d109-4be5-b1e2-8de9d982dfb7\") " pod="openshift-service-ca/service-ca-9c57cc56f-45bgn" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630970 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9mlh\" (UniqueName: \"kubernetes.io/projected/43660579-30f6-416b-b60a-db19d0f244f8-kube-api-access-j9mlh\") pod \"cluster-image-registry-operator-dc59b4c8b-dmhzm\" (UID: \"43660579-30f6-416b-b60a-db19d0f244f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dmhzm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.630991 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/126d37e8-f81f-445a-bf48-49d228d42748-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tpvkm\" (UID: \"126d37e8-f81f-445a-bf48-49d228d42748\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpvkm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.631010 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c0f0f4c-b174-4082-a173-60b46a8d83fc-webhook-cert\") pod \"packageserver-d55dfcdfc-t44jd\" (UID: \"5c0f0f4c-b174-4082-a173-60b46a8d83fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.631029 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmj4f\" (UniqueName: \"kubernetes.io/projected/5c0f0f4c-b174-4082-a173-60b46a8d83fc-kube-api-access-pmj4f\") pod \"packageserver-d55dfcdfc-t44jd\" (UID: \"5c0f0f4c-b174-4082-a173-60b46a8d83fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.631058 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/94faa019-bb1f-48da-a0e8-395e8a7d13b4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-q24qk\" (UID: \"94faa019-bb1f-48da-a0e8-395e8a7d13b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q24qk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.631605 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.632328 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4dbb5b21-8479-4f73-b71d-2f2ab2a22b82-node-bootstrap-token\") pod \"machine-config-server-rhcmp\" (UID: \"4dbb5b21-8479-4f73-b71d-2f2ab2a22b82\") " pod="openshift-machine-config-operator/machine-config-server-rhcmp" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.632519 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe39e570-d08d-473e-a9d8-4aedffae0f04-registration-dir\") pod \"csi-hostpathplugin-8phbj\" (UID: \"fe39e570-d08d-473e-a9d8-4aedffae0f04\") " pod="hostpath-provisioner/csi-hostpathplugin-8phbj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.632838 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eba25f55-9f7e-43cc-a111-a5e4184c037e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jx62d\" (UID: \"eba25f55-9f7e-43cc-a111-a5e4184c037e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jx62d" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.632876 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c8ebe95-b54a-4271-b6ad-a0d081bc93a7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-czrvw\" (UID: \"9c8ebe95-b54a-4271-b6ad-a0d081bc93a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czrvw" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.632889 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/9560d6f0-3fc0-483c-a3a7-87e022468221-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mxdtl\" (UID: \"9560d6f0-3fc0-483c-a3a7-87e022468221\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mxdtl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.633182 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fe39e570-d08d-473e-a9d8-4aedffae0f04-plugins-dir\") pod \"csi-hostpathplugin-8phbj\" (UID: \"fe39e570-d08d-473e-a9d8-4aedffae0f04\") " pod="hostpath-provisioner/csi-hostpathplugin-8phbj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.633269 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9560d6f0-3fc0-483c-a3a7-87e022468221-images\") pod \"machine-config-operator-74547568cd-mxdtl\" (UID: \"9560d6f0-3fc0-483c-a3a7-87e022468221\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mxdtl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.633771 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3286e37f-50f9-4120-af33-d9e09be31e37-config-volume\") pod \"dns-default-k72td\" (UID: \"3286e37f-50f9-4120-af33-d9e09be31e37\") " pod="openshift-dns/dns-default-k72td" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.633875 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrxbh\" (UniqueName: \"kubernetes.io/projected/29341782-010b-4540-99a9-8cb20f667cef-kube-api-access-mrxbh\") pod \"etcd-operator-b45778765-wzmz8\" (UID: \"29341782-010b-4540-99a9-8cb20f667cef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.634006 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/fe39e570-d08d-473e-a9d8-4aedffae0f04-mountpoint-dir\") pod \"csi-hostpathplugin-8phbj\" (UID: \"fe39e570-d08d-473e-a9d8-4aedffae0f04\") " pod="hostpath-provisioner/csi-hostpathplugin-8phbj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.634535 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62-config-volume\") pod \"collect-profiles-29429610-bgnz8\" (UID: \"ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.634690 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9560d6f0-3fc0-483c-a3a7-87e022468221-proxy-tls\") pod \"machine-config-operator-74547568cd-mxdtl\" (UID: \"9560d6f0-3fc0-483c-a3a7-87e022468221\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mxdtl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.635695 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c8ebe95-b54a-4271-b6ad-a0d081bc93a7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-czrvw\" (UID: \"9c8ebe95-b54a-4271-b6ad-a0d081bc93a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czrvw" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.635776 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4dbb5b21-8479-4f73-b71d-2f2ab2a22b82-certs\") pod \"machine-config-server-rhcmp\" (UID: \"4dbb5b21-8479-4f73-b71d-2f2ab2a22b82\") " pod="openshift-machine-config-operator/machine-config-server-rhcmp" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.636287 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/126d37e8-f81f-445a-bf48-49d228d42748-proxy-tls\") pod \"machine-config-controller-84d6567774-tpvkm\" (UID: \"126d37e8-f81f-445a-bf48-49d228d42748\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpvkm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.636409 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e22aff6-5dc0-454e-b980-d39cfcd08ba6-metrics-tls\") pod \"ingress-operator-5b745b69d9-jnf5v\" (UID: \"7e22aff6-5dc0-454e-b980-d39cfcd08ba6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnf5v" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.636755 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/43660579-30f6-416b-b60a-db19d0f244f8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dmhzm\" (UID: \"43660579-30f6-416b-b60a-db19d0f244f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dmhzm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.636718 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3aed04d0-4166-4ed3-bf2b-39e9598d0160-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mvljn\" (UID: \"3aed04d0-4166-4ed3-bf2b-39e9598d0160\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.636816 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fe39e570-d08d-473e-a9d8-4aedffae0f04-csi-data-dir\") pod \"csi-hostpathplugin-8phbj\" (UID: \"fe39e570-d08d-473e-a9d8-4aedffae0f04\") " pod="hostpath-provisioner/csi-hostpathplugin-8phbj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 
05:39:31.636972 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/dea1e983-d109-4be5-b1e2-8de9d982dfb7-signing-cabundle\") pod \"service-ca-9c57cc56f-45bgn\" (UID: \"dea1e983-d109-4be5-b1e2-8de9d982dfb7\") " pod="openshift-service-ca/service-ca-9c57cc56f-45bgn" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.637262 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c0f0f4c-b174-4082-a173-60b46a8d83fc-apiservice-cert\") pod \"packageserver-d55dfcdfc-t44jd\" (UID: \"5c0f0f4c-b174-4082-a173-60b46a8d83fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.637277 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5c0f0f4c-b174-4082-a173-60b46a8d83fc-tmpfs\") pod \"packageserver-d55dfcdfc-t44jd\" (UID: \"5c0f0f4c-b174-4082-a173-60b46a8d83fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.637879 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07bf5f61-69f9-4b0b-9f0d-f7c8e1b8379b-config\") pod \"service-ca-operator-777779d784-xlgbx\" (UID: \"07bf5f61-69f9-4b0b-9f0d-f7c8e1b8379b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xlgbx" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.638056 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/126d37e8-f81f-445a-bf48-49d228d42748-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tpvkm\" (UID: \"126d37e8-f81f-445a-bf48-49d228d42748\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpvkm" Dec 
15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.638134 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac6e673-f966-4177-84a1-440b3989f4ab-config\") pod \"kube-controller-manager-operator-78b949d7b-7g65v\" (UID: \"4ac6e673-f966-4177-84a1-440b3989f4ab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7g65v" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.638149 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a6b61cd-3536-4763-8fe5-0a49f5a360b5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzkjc\" (UID: \"7a6b61cd-3536-4763-8fe5-0a49f5a360b5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzkjc" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.638231 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe39e570-d08d-473e-a9d8-4aedffae0f04-socket-dir\") pod \"csi-hostpathplugin-8phbj\" (UID: \"fe39e570-d08d-473e-a9d8-4aedffae0f04\") " pod="hostpath-provisioner/csi-hostpathplugin-8phbj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.638871 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07bf5f61-69f9-4b0b-9f0d-f7c8e1b8379b-serving-cert\") pod \"service-ca-operator-777779d784-xlgbx\" (UID: \"07bf5f61-69f9-4b0b-9f0d-f7c8e1b8379b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xlgbx" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.638909 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f139e81b-c534-4004-81b1-202a6b0e45f2-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-xkbmm\" (UID: \"f139e81b-c534-4004-81b1-202a6b0e45f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xkbmm" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.639189 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bbed563-3f20-42a1-949b-d5490500299b-cert\") pod \"ingress-canary-4txcp\" (UID: \"4bbed563-3f20-42a1-949b-d5490500299b\") " pod="openshift-ingress-canary/ingress-canary-4txcp" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.639311 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/94faa019-bb1f-48da-a0e8-395e8a7d13b4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-q24qk\" (UID: \"94faa019-bb1f-48da-a0e8-395e8a7d13b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q24qk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.639660 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e22aff6-5dc0-454e-b980-d39cfcd08ba6-trusted-ca\") pod \"ingress-operator-5b745b69d9-jnf5v\" (UID: \"7e22aff6-5dc0-454e-b980-d39cfcd08ba6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnf5v" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.640225 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a6b61cd-3536-4763-8fe5-0a49f5a360b5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzkjc\" (UID: \"7a6b61cd-3536-4763-8fe5-0a49f5a360b5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzkjc" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.640315 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/1335c7dc-dfe5-40d0-81b2-bc095c5a80c0-srv-cert\") pod \"catalog-operator-68c6474976-mc527\" (UID: \"1335c7dc-dfe5-40d0-81b2-bc095c5a80c0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mc527" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.640820 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/dea1e983-d109-4be5-b1e2-8de9d982dfb7-signing-key\") pod \"service-ca-9c57cc56f-45bgn\" (UID: \"dea1e983-d109-4be5-b1e2-8de9d982dfb7\") " pod="openshift-service-ca/service-ca-9c57cc56f-45bgn" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.641087 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ac6e673-f966-4177-84a1-440b3989f4ab-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7g65v\" (UID: \"4ac6e673-f966-4177-84a1-440b3989f4ab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7g65v" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.641485 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3286e37f-50f9-4120-af33-d9e09be31e37-metrics-tls\") pod \"dns-default-k72td\" (UID: \"3286e37f-50f9-4120-af33-d9e09be31e37\") " pod="openshift-dns/dns-default-k72td" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.642447 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c0f0f4c-b174-4082-a173-60b46a8d83fc-webhook-cert\") pod \"packageserver-d55dfcdfc-t44jd\" (UID: \"5c0f0f4c-b174-4082-a173-60b46a8d83fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.642488 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/3aed04d0-4166-4ed3-bf2b-39e9598d0160-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mvljn\" (UID: \"3aed04d0-4166-4ed3-bf2b-39e9598d0160\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.642965 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62-secret-volume\") pod \"collect-profiles-29429610-bgnz8\" (UID: \"ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.643080 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/94faa019-bb1f-48da-a0e8-395e8a7d13b4-srv-cert\") pod \"olm-operator-6b444d44fb-q24qk\" (UID: \"94faa019-bb1f-48da-a0e8-395e8a7d13b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q24qk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.643540 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1335c7dc-dfe5-40d0-81b2-bc095c5a80c0-profile-collector-cert\") pod \"catalog-operator-68c6474976-mc527\" (UID: \"1335c7dc-dfe5-40d0-81b2-bc095c5a80c0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mc527" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.644018 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-slnb9" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.652465 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db7a7a97-4354-4b54-afbc-e47fb8751316-bound-sa-token\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: W1215 05:39:31.654601 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4b0569a_a2ad_445f_ba73_bec2508e5c0b.slice/crio-67ddcc3881bd2e1a6e1187ae34d9a52438d2f3bf240baa23d651a29d4b1f88e5 WatchSource:0}: Error finding container 67ddcc3881bd2e1a6e1187ae34d9a52438d2f3bf240baa23d651a29d4b1f88e5: Status 404 returned error can't find the container with id 67ddcc3881bd2e1a6e1187ae34d9a52438d2f3bf240baa23d651a29d4b1f88e5 Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.673284 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qrfq\" (UniqueName: \"kubernetes.io/projected/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-kube-api-access-9qrfq\") pod \"console-f9d7485db-2sdgk\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.693558 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6bvf\" (UniqueName: \"kubernetes.io/projected/54c6f212-1947-47ff-a62a-dcc9b9559882-kube-api-access-j6bvf\") pod \"cluster-samples-operator-665b6dd947-w46n4\" (UID: \"54c6f212-1947-47ff-a62a-dcc9b9559882\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w46n4" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.699060 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-5jmx6" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.712458 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqwc4\" (UniqueName: \"kubernetes.io/projected/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-kube-api-access-gqwc4\") pod \"route-controller-manager-6576b87f9c-bddlq\" (UID: \"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.731917 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:31 crc kubenswrapper[4747]: E1215 05:39:31.732037 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:32.232013076 +0000 UTC m=+135.928524994 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.732725 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: E1215 05:39:31.733304 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:32.233290488 +0000 UTC m=+135.929802404 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.734987 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q66vc\" (UniqueName: \"kubernetes.io/projected/b3604b48-6e56-4470-aa4b-c0d1956b42d0-kube-api-access-q66vc\") pod \"openshift-controller-manager-operator-756b6f6bc6-76d4n\" (UID: \"b3604b48-6e56-4470-aa4b-c0d1956b42d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-76d4n" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.738720 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.750868 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rcnrf"] Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.754402 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.756207 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr2q6\" (UniqueName: \"kubernetes.io/projected/feafc60a-2dff-433e-ad58-01dcc0f23974-kube-api-access-sr2q6\") pod \"console-operator-58897d9998-75dh6\" (UID: \"feafc60a-2dff-433e-ad58-01dcc0f23974\") " pod="openshift-console-operator/console-operator-58897d9998-75dh6" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.764549 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ml4rr"] Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.784707 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7fm9\" (UniqueName: \"kubernetes.io/projected/fdff8a05-dbcd-4bb1-9b57-ec2c9bf02d0e-kube-api-access-n7fm9\") pod \"package-server-manager-789f6589d5-mpvdj\" (UID: \"fdff8a05-dbcd-4bb1-9b57-ec2c9bf02d0e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpvdj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.810360 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cdc82fd-2b58-4bbf-8d67-5e66cf80ebb8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-clqdc\" (UID: \"9cdc82fd-2b58-4bbf-8d67-5e66cf80ebb8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-clqdc" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.812110 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cjc2b"] Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.812828 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.821107 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r4qd\" (UniqueName: \"kubernetes.io/projected/57009fe6-55f5-42e6-8389-64796b3784c3-kube-api-access-2r4qd\") pod \"apiserver-7bbb656c7d-6zbfg\" (UID: \"57009fe6-55f5-42e6-8389-64796b3784c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.822523 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w46n4" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.827814 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-clqdc" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.839686 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:31 crc kubenswrapper[4747]: E1215 05:39:31.839774 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:32.339754603 +0000 UTC m=+136.036266520 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.840485 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.840548 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-gvbxq" Dec 15 05:39:31 crc kubenswrapper[4747]: E1215 05:39:31.840798 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:32.340783837 +0000 UTC m=+136.037295755 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.851341 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-snk8n" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.856218 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.860648 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d65z8\" (UniqueName: \"kubernetes.io/projected/db7a7a97-4354-4b54-afbc-e47fb8751316-kube-api-access-d65z8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.862225 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpvdj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.872675 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2bf6\" (UniqueName: \"kubernetes.io/projected/fd2dfc21-3dfb-470f-8417-b7f3d1c8d75b-kube-api-access-r2bf6\") pod \"dns-operator-744455d44c-pkbhr\" (UID: \"fd2dfc21-3dfb-470f-8417-b7f3d1c8d75b\") " pod="openshift-dns-operator/dns-operator-744455d44c-pkbhr" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.894766 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9kll\" (UniqueName: \"kubernetes.io/projected/9c3f3cf2-3751-4315-bcf9-f42a5650c32b-kube-api-access-x9kll\") pod \"openshift-config-operator-7777fb866f-gr6p7\" (UID: \"9c3f3cf2-3751-4315-bcf9-f42a5650c32b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gr6p7" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.914431 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wttj\" (UniqueName: \"kubernetes.io/projected/b7e78311-fd22-49fa-a423-9037fc15aaa5-kube-api-access-8wttj\") pod \"authentication-operator-69f744f599-q6zkl\" (UID: \"b7e78311-fd22-49fa-a423-9037fc15aaa5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q6zkl" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.930915 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5jmx6"] Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.940078 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kthpk\" (UniqueName: \"kubernetes.io/projected/7a6b61cd-3536-4763-8fe5-0a49f5a360b5-kube-api-access-kthpk\") pod \"kube-storage-version-migrator-operator-b67b599dd-kzkjc\" (UID: \"7a6b61cd-3536-4763-8fe5-0a49f5a360b5\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzkjc" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.941038 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:31 crc kubenswrapper[4747]: E1215 05:39:31.941665 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:32.441647283 +0000 UTC m=+136.138159200 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.956278 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp8ph\" (UniqueName: \"kubernetes.io/projected/fe39e570-d08d-473e-a9d8-4aedffae0f04-kube-api-access-mp8ph\") pod \"csi-hostpathplugin-8phbj\" (UID: \"fe39e570-d08d-473e-a9d8-4aedffae0f04\") " pod="hostpath-provisioner/csi-hostpathplugin-8phbj" Dec 15 05:39:31 crc kubenswrapper[4747]: W1215 05:39:31.961952 4747 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb0ca9f3_9ee8_4299_adf4_5220bf190a0c.slice/crio-3ced3a518a3a2b12290d85b91b01f974aa87944bdfcc9a6528196e447fd9c3ab WatchSource:0}: Error finding container 3ced3a518a3a2b12290d85b91b01f974aa87944bdfcc9a6528196e447fd9c3ab: Status 404 returned error can't find the container with id 3ced3a518a3a2b12290d85b91b01f974aa87944bdfcc9a6528196e447fd9c3ab Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.975300 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8phbj" Dec 15 05:39:31 crc kubenswrapper[4747]: I1215 05:39:31.980764 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69fcp\" (UniqueName: \"kubernetes.io/projected/3aed04d0-4166-4ed3-bf2b-39e9598d0160-kube-api-access-69fcp\") pod \"marketplace-operator-79b997595-mvljn\" (UID: \"3aed04d0-4166-4ed3-bf2b-39e9598d0160\") " pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.006565 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ac6e673-f966-4177-84a1-440b3989f4ab-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7g65v\" (UID: \"4ac6e673-f966-4177-84a1-440b3989f4ab\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7g65v" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.012607 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5skg\" (UniqueName: \"kubernetes.io/projected/ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62-kube-api-access-m5skg\") pod \"collect-profiles-29429610-bgnz8\" (UID: \"ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.029304 4747 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-76d4n" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.035293 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v6cc\" (UniqueName: \"kubernetes.io/projected/94faa019-bb1f-48da-a0e8-395e8a7d13b4-kube-api-access-6v6cc\") pod \"olm-operator-6b444d44fb-q24qk\" (UID: \"94faa019-bb1f-48da-a0e8-395e8a7d13b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q24qk" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.042798 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:32 crc kubenswrapper[4747]: E1215 05:39:32.043947 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:32.543915649 +0000 UTC m=+136.240427566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.048966 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-75dh6" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.056874 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67k9m\" (UniqueName: \"kubernetes.io/projected/3286e37f-50f9-4120-af33-d9e09be31e37-kube-api-access-67k9m\") pod \"dns-default-k72td\" (UID: \"3286e37f-50f9-4120-af33-d9e09be31e37\") " pod="openshift-dns/dns-default-k72td" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.070340 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.073617 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-q6zkl" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.076707 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx8ks\" (UniqueName: \"kubernetes.io/projected/126d37e8-f81f-445a-bf48-49d228d42748-kube-api-access-tx8ks\") pod \"machine-config-controller-84d6567774-tpvkm\" (UID: \"126d37e8-f81f-445a-bf48-49d228d42748\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpvkm" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.095121 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfpst\" (UniqueName: \"kubernetes.io/projected/eba25f55-9f7e-43cc-a111-a5e4184c037e-kube-api-access-sfpst\") pod \"multus-admission-controller-857f4d67dd-jx62d\" (UID: \"eba25f55-9f7e-43cc-a111-a5e4184c037e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jx62d" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.095131 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pkbhr" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.135917 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gr6p7" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.137979 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rx46\" (UniqueName: \"kubernetes.io/projected/4dbb5b21-8479-4f73-b71d-2f2ab2a22b82-kube-api-access-5rx46\") pod \"machine-config-server-rhcmp\" (UID: \"4dbb5b21-8479-4f73-b71d-2f2ab2a22b82\") " pod="openshift-machine-config-operator/machine-config-server-rhcmp" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.148297 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-clqdc"] Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.152753 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxg28\" (UniqueName: \"kubernetes.io/projected/9560d6f0-3fc0-483c-a3a7-87e022468221-kube-api-access-sxg28\") pod \"machine-config-operator-74547568cd-mxdtl\" (UID: \"9560d6f0-3fc0-483c-a3a7-87e022468221\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mxdtl" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.157133 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:32 crc kubenswrapper[4747]: E1215 05:39:32.157584 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:32.657567097 +0000 UTC m=+136.354079014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.164837 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e22aff6-5dc0-454e-b980-d39cfcd08ba6-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jnf5v\" (UID: \"7e22aff6-5dc0-454e-b980-d39cfcd08ba6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnf5v" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.168186 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpvkm" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.176889 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-2sdgk"] Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.177029 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7g65v" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.183763 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z5c2\" (UniqueName: \"kubernetes.io/projected/dea1e983-d109-4be5-b1e2-8de9d982dfb7-kube-api-access-8z5c2\") pod \"service-ca-9c57cc56f-45bgn\" (UID: \"dea1e983-d109-4be5-b1e2-8de9d982dfb7\") " pod="openshift-service-ca/service-ca-9c57cc56f-45bgn" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.198180 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.199455 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzkjc" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.199845 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v472l"] Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.204311 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-snk8n"] Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.210122 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43660579-30f6-416b-b60a-db19d0f244f8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dmhzm\" (UID: \"43660579-30f6-416b-b60a-db19d0f244f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dmhzm" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.211088 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mxdtl" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.217489 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-jx62d" Dec 15 05:39:32 crc kubenswrapper[4747]: W1215 05:39:32.219901 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cdc82fd_2b58_4bbf_8d67_5e66cf80ebb8.slice/crio-92fa4f75e7e5e8406bfe15eeafea2b6390fb46af90c18c259a5dd008873d6f95 WatchSource:0}: Error finding container 92fa4f75e7e5e8406bfe15eeafea2b6390fb46af90c18c259a5dd008873d6f95: Status 404 returned error can't find the container with id 92fa4f75e7e5e8406bfe15eeafea2b6390fb46af90c18c259a5dd008873d6f95 Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.222779 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-45bgn" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.223520 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp9rm\" (UniqueName: \"kubernetes.io/projected/1335c7dc-dfe5-40d0-81b2-bc095c5a80c0-kube-api-access-xp9rm\") pod \"catalog-operator-68c6474976-mc527\" (UID: \"1335c7dc-dfe5-40d0-81b2-bc095c5a80c0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mc527" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.226075 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-slnb9" event={"ID":"e4b0569a-a2ad-445f-ba73-bec2508e5c0b","Type":"ContainerStarted","Data":"c573fd53e7d7a6fe53fd1faf14c25b45263b8690232fef6a95500a265148d632"} Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.226107 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-slnb9" event={"ID":"e4b0569a-a2ad-445f-ba73-bec2508e5c0b","Type":"ContainerStarted","Data":"67ddcc3881bd2e1a6e1187ae34d9a52438d2f3bf240baa23d651a29d4b1f88e5"} Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.233645 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q24qk" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.233708 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.242546 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mc527" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.242734 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d5tp\" (UniqueName: \"kubernetes.io/projected/07bf5f61-69f9-4b0b-9f0d-f7c8e1b8379b-kube-api-access-7d5tp\") pod \"service-ca-operator-777779d784-xlgbx\" (UID: \"07bf5f61-69f9-4b0b-9f0d-f7c8e1b8379b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xlgbx" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.244871 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpvdj"] Dec 15 05:39:32 crc kubenswrapper[4747]: W1215 05:39:32.251098 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcaa99f47_6c6a_4642_b2eb_946507229c80.slice/crio-9d3f95fe8a8f6cdf026d7b270b802ff1506cd7694c102b2044aed9587a7e09ba WatchSource:0}: Error finding container 9d3f95fe8a8f6cdf026d7b270b802ff1506cd7694c102b2044aed9587a7e09ba: Status 404 returned error can't find the container with id 
9d3f95fe8a8f6cdf026d7b270b802ff1506cd7694c102b2044aed9587a7e09ba Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.256865 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c8ebe95-b54a-4271-b6ad-a0d081bc93a7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-czrvw\" (UID: \"9c8ebe95-b54a-4271-b6ad-a0d081bc93a7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czrvw" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.258543 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:32 crc kubenswrapper[4747]: E1215 05:39:32.258942 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:32.758913981 +0000 UTC m=+136.455425898 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.266769 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-g46rv" event={"ID":"efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39","Type":"ContainerStarted","Data":"ad841bff832fa74c45665653bd4e97541d20dda36298c68141221b56bff0f6f8"} Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.266804 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-g46rv" event={"ID":"efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39","Type":"ContainerStarted","Data":"4628cd7873170064a54b013395245d764c376406c2a9fc643c5b953050eac5bc"} Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.266815 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-g46rv" event={"ID":"efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39","Type":"ContainerStarted","Data":"1a5a6f85e5b2145da63e12bde2adf2e3766ea335d4f54a3b8d411c3c07e7352b"} Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.269800 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rcnrf" event={"ID":"b3d15d6e-b312-4d38-9720-46d211e795f6","Type":"ContainerStarted","Data":"f23d0903e2ab079251913b2f79a9adb21d6e6432fc2f84bbf8ba4df84fa6eaed"} Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.269848 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rcnrf" event={"ID":"b3d15d6e-b312-4d38-9720-46d211e795f6","Type":"ContainerStarted","Data":"0fd373e40d3ec7f1a06aea17405d1f0f77dd50c6a24f2cb4952b80a6ba331a90"} Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.284552 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-k72td" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.286298 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9mlh\" (UniqueName: \"kubernetes.io/projected/43660579-30f6-416b-b60a-db19d0f244f8-kube-api-access-j9mlh\") pod \"cluster-image-registry-operator-dc59b4c8b-dmhzm\" (UID: \"43660579-30f6-416b-b60a-db19d0f244f8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dmhzm" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.287753 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rhcmp" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.292181 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wzmz8"] Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.293159 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8phbj"] Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.294544 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-gvbxq" event={"ID":"f9755e7f-72e0-4b8a-94c2-6702dec42d0b","Type":"ContainerStarted","Data":"e677b99cf25b1476f69fa026207ffc88d58c9824a1efe5c511d15ac57e939a56"} Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.300877 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" 
event={"ID":"fd50242e-74be-4e24-9e3c-121196f60867","Type":"ContainerStarted","Data":"831a7d6cd9e1da123696edc9a56beceaa4eda8cd2203f9880f12abfae91b67f5"} Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.300997 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" event={"ID":"fd50242e-74be-4e24-9e3c-121196f60867","Type":"ContainerStarted","Data":"49f089e258d68c8ec9c82ebd29c0184878bd3b6e57ff0682c3469f16157a6d26"} Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.301827 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.302917 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" event={"ID":"5ec883a6-2265-4c56-97f1-98cd4a3aa084","Type":"ContainerStarted","Data":"a43c8603cf8ab53e00d362e6d8c64302be090a4eca71dff676180bd2249d2990"} Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.306837 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8nm5\" (UniqueName: \"kubernetes.io/projected/4bbed563-3f20-42a1-949b-d5490500299b-kube-api-access-s8nm5\") pod \"ingress-canary-4txcp\" (UID: \"4bbed563-3f20-42a1-949b-d5490500299b\") " pod="openshift-ingress-canary/ingress-canary-4txcp" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.308880 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5jmx6" event={"ID":"eb0ca9f3-9ee8-4299-adf4-5220bf190a0c","Type":"ContainerStarted","Data":"3ced3a518a3a2b12290d85b91b01f974aa87944bdfcc9a6528196e447fd9c3ab"} Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.309523 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-5jmx6" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.314403 4747 
patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cjc2b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.314454 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" podUID="fd50242e-74be-4e24-9e3c-121196f60867" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.317491 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2hvr\" (UniqueName: \"kubernetes.io/projected/7e22aff6-5dc0-454e-b980-d39cfcd08ba6-kube-api-access-b2hvr\") pod \"ingress-operator-5b745b69d9-jnf5v\" (UID: \"7e22aff6-5dc0-454e-b980-d39cfcd08ba6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnf5v" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.320990 4747 patch_prober.go:28] interesting pod/downloads-7954f5f757-5jmx6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.321072 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5jmx6" podUID="eb0ca9f3-9ee8-4299-adf4-5220bf190a0c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.325994 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w46n4"] Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.339370 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq"] Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.350731 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb4nz\" (UniqueName: \"kubernetes.io/projected/f139e81b-c534-4004-81b1-202a6b0e45f2-kube-api-access-lb4nz\") pod \"control-plane-machine-set-operator-78cbb6b69f-xkbmm\" (UID: \"f139e81b-c534-4004-81b1-202a6b0e45f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xkbmm" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.353645 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmj4f\" (UniqueName: \"kubernetes.io/projected/5c0f0f4c-b174-4082-a173-60b46a8d83fc-kube-api-access-pmj4f\") pod \"packageserver-d55dfcdfc-t44jd\" (UID: \"5c0f0f4c-b174-4082-a173-60b46a8d83fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.359362 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:32 crc kubenswrapper[4747]: E1215 05:39:32.359784 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:32.859757078 +0000 UTC m=+136.556268994 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.361589 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnf5v" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.376452 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg"] Dec 15 05:39:32 crc kubenswrapper[4747]: W1215 05:39:32.423357 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c412dfd_4cd4_47d5_ac92_59e7b59d6a2d.slice/crio-e7949581146ae66ff2adef154a63b9f3d81d8794aafb96baedfa4e832d4e4ba3 WatchSource:0}: Error finding container e7949581146ae66ff2adef154a63b9f3d81d8794aafb96baedfa4e832d4e4ba3: Status 404 returned error can't find the container with id e7949581146ae66ff2adef154a63b9f3d81d8794aafb96baedfa4e832d4e4ba3 Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.461814 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:32 crc kubenswrapper[4747]: E1215 05:39:32.462164 4747 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:32.96215112 +0000 UTC m=+136.658663037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.479654 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xkbmm" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.500080 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xlgbx" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.505782 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czrvw" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.546432 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.551940 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dmhzm" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.563072 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:32 crc kubenswrapper[4747]: E1215 05:39:32.564009 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:33.063993405 +0000 UTC m=+136.760505322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.588938 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4txcp" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.611730 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gr6p7"] Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.617532 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mvljn"] Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.667614 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:32 crc kubenswrapper[4747]: E1215 05:39:32.680489 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:33.180460346 +0000 UTC m=+136.876972253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.711461 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pkbhr"] Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.711497 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzkjc"] Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.711515 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8"] Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.720401 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-75dh6"] Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.736352 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-76d4n"] Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.740207 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-q6zkl"] Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.757566 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7g65v"] Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.768945 4747 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:32 crc kubenswrapper[4747]: E1215 05:39:32.769266 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:33.269236415 +0000 UTC m=+136.965748333 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.769338 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:32 crc kubenswrapper[4747]: E1215 05:39:32.769655 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:33.269640466 +0000 UTC m=+136.966152382 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.779800 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tpvkm"] Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.841071 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-gvbxq" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.851035 4747 patch_prober.go:28] interesting pod/router-default-5444994796-gvbxq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 15 05:39:32 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Dec 15 05:39:32 crc kubenswrapper[4747]: [+]process-running ok Dec 15 05:39:32 crc kubenswrapper[4747]: healthz check failed Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.851074 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvbxq" podUID="f9755e7f-72e0-4b8a-94c2-6702dec42d0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.870645 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:32 crc kubenswrapper[4747]: E1215 05:39:32.870979 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:33.370948516 +0000 UTC m=+137.067460423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:32 crc kubenswrapper[4747]: W1215 05:39:32.895032 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3604b48_6e56_4470_aa4b_c0d1956b42d0.slice/crio-708bbade59ff97b052a6815e87c79ca8b9c9861435e810fd5085e9e193aa0844 WatchSource:0}: Error finding container 708bbade59ff97b052a6815e87c79ca8b9c9861435e810fd5085e9e193aa0844: Status 404 returned error can't find the container with id 708bbade59ff97b052a6815e87c79ca8b9c9861435e810fd5085e9e193aa0844 Dec 15 05:39:32 crc kubenswrapper[4747]: W1215 05:39:32.897472 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod126d37e8_f81f_445a_bf48_49d228d42748.slice/crio-4c802bae805209c556fe8dc06a6e3777e49affb7edfdeef57e123aaa24b0df62 WatchSource:0}: Error finding container 4c802bae805209c556fe8dc06a6e3777e49affb7edfdeef57e123aaa24b0df62: Status 404 returned error can't find the container with id 
4c802bae805209c556fe8dc06a6e3777e49affb7edfdeef57e123aaa24b0df62 Dec 15 05:39:32 crc kubenswrapper[4747]: W1215 05:39:32.924042 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7e78311_fd22_49fa_a423_9037fc15aaa5.slice/crio-d4c0db6e5ed77f9410c4a799c82ba0f185d87d56789e6f357f13a3e47d56f08c WatchSource:0}: Error finding container d4c0db6e5ed77f9410c4a799c82ba0f185d87d56789e6f357f13a3e47d56f08c: Status 404 returned error can't find the container with id d4c0db6e5ed77f9410c4a799c82ba0f185d87d56789e6f357f13a3e47d56f08c Dec 15 05:39:32 crc kubenswrapper[4747]: W1215 05:39:32.930026 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ac6e673_f966_4177_84a1_440b3989f4ab.slice/crio-192e5ec233a228aa69f2e51afcfcb932a518d2705a2a5c952a4029996b29cebe WatchSource:0}: Error finding container 192e5ec233a228aa69f2e51afcfcb932a518d2705a2a5c952a4029996b29cebe: Status 404 returned error can't find the container with id 192e5ec233a228aa69f2e51afcfcb932a518d2705a2a5c952a4029996b29cebe Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.977353 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:32 crc kubenswrapper[4747]: E1215 05:39:32.977685 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:33.477669664 +0000 UTC m=+137.174181580 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:32 crc kubenswrapper[4747]: I1215 05:39:32.981512 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jx62d"] Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.077843 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:33 crc kubenswrapper[4747]: E1215 05:39:33.078014 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:33.577986191 +0000 UTC m=+137.274498107 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.078969 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:33 crc kubenswrapper[4747]: E1215 05:39:33.079413 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:33.579401251 +0000 UTC m=+137.275913168 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.156780 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-k72td"] Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.179711 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mc527"] Dec 15 05:39:33 crc kubenswrapper[4747]: E1215 05:39:33.180162 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:33.680140673 +0000 UTC m=+137.376652590 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.190883 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-45bgn"] Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.195061 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.195527 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:33 crc kubenswrapper[4747]: E1215 05:39:33.196221 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:33.696206659 +0000 UTC m=+137.392718566 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.218033 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xkbmm"] Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.248729 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q24qk"] Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.262868 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mxdtl"] Dec 15 05:39:33 crc kubenswrapper[4747]: W1215 05:39:33.290971 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3286e37f_50f9_4120_af33_d9e09be31e37.slice/crio-6c033dc8d0ce4904d97eade3c7fcc3b50c7d73c6b915f89f4df2057700f7e0e8 WatchSource:0}: Error finding container 6c033dc8d0ce4904d97eade3c7fcc3b50c7d73c6b915f89f4df2057700f7e0e8: Status 404 returned error can't find the container with id 6c033dc8d0ce4904d97eade3c7fcc3b50c7d73c6b915f89f4df2057700f7e0e8 Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.326057 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:33 
crc kubenswrapper[4747]: E1215 05:39:33.326575 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:33.826561781 +0000 UTC m=+137.523073698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.353582 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-75dh6" event={"ID":"feafc60a-2dff-433e-ad58-01dcc0f23974","Type":"ContainerStarted","Data":"73568a5975f1335dc40b164f2739f5aa86575fea1bcf64107acffceecf8009d4"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.373772 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-gvbxq" event={"ID":"f9755e7f-72e0-4b8a-94c2-6702dec42d0b","Type":"ContainerStarted","Data":"97ac7e070e953a383a210deaa8a57b13a95f310584c115299cb39d2d44585d80"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.389579 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7g65v" event={"ID":"4ac6e673-f966-4177-84a1-440b3989f4ab","Type":"ContainerStarted","Data":"192e5ec233a228aa69f2e51afcfcb932a518d2705a2a5c952a4029996b29cebe"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.390869 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-clqdc" event={"ID":"9cdc82fd-2b58-4bbf-8d67-5e66cf80ebb8","Type":"ContainerStarted","Data":"92fa4f75e7e5e8406bfe15eeafea2b6390fb46af90c18c259a5dd008873d6f95"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.405465 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jnf5v"] Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.406458 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gr6p7" event={"ID":"9c3f3cf2-3751-4315-bcf9-f42a5650c32b","Type":"ContainerStarted","Data":"551a156122f33e6903d242ab2f46b3fe12d78aa8f10a54c06fef236d9a8d97e8"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.409348 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpvdj" event={"ID":"fdff8a05-dbcd-4bb1-9b57-ec2c9bf02d0e","Type":"ContainerStarted","Data":"712805861f68eaa5fb148eb57e043659112aecbbed71b67380e5f2385f6ada46"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.409401 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpvdj" event={"ID":"fdff8a05-dbcd-4bb1-9b57-ec2c9bf02d0e","Type":"ContainerStarted","Data":"0bc6ed87a402d76763664fb420d0660d71da5efa313c5831ca63ae62b0b826f5"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.410768 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8" event={"ID":"ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62","Type":"ContainerStarted","Data":"e905748daf0e80829c719015ed7f8aa2280ede98989749dfe040e5bd60f28af5"} Dec 15 05:39:33 crc kubenswrapper[4747]: W1215 05:39:33.425263 4747 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9560d6f0_3fc0_483c_a3a7_87e022468221.slice/crio-9bf803912dd6adc4fec1d07d09b9077ac9d15597fef156d13501096d3840e810 WatchSource:0}: Error finding container 9bf803912dd6adc4fec1d07d09b9077ac9d15597fef156d13501096d3840e810: Status 404 returned error can't find the container with id 9bf803912dd6adc4fec1d07d09b9077ac9d15597fef156d13501096d3840e810 Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.446703 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:33 crc kubenswrapper[4747]: E1215 05:39:33.447214 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:33.947198242 +0000 UTC m=+137.643710149 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.464062 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-v472l" event={"ID":"caa99f47-6c6a-4642-b2eb-946507229c80","Type":"ContainerStarted","Data":"427bce72ad2a9bd08f903a76cc07c1dcbd2b2908c5078234548d304030d7ebb2"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.464103 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-v472l" event={"ID":"caa99f47-6c6a-4642-b2eb-946507229c80","Type":"ContainerStarted","Data":"9d3f95fe8a8f6cdf026d7b270b802ff1506cd7694c102b2044aed9587a7e09ba"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.464811 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.467548 4747 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-v472l container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.467614 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-v472l" podUID="caa99f47-6c6a-4642-b2eb-946507229c80" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": 
dial tcp 10.217.0.10:6443: connect: connection refused" Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.473016 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd"] Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.534157 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzkjc" event={"ID":"7a6b61cd-3536-4763-8fe5-0a49f5a360b5","Type":"ContainerStarted","Data":"31f19a43ad5c5def240098e7f1444969512cb973c29aeed3da69fa39b1481933"} Dec 15 05:39:33 crc kubenswrapper[4747]: W1215 05:39:33.546786 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c0f0f4c_b174_4082_a173_60b46a8d83fc.slice/crio-625acf3e6124f0f31e0995e83cf81b8f1c38af4b598b220f0b6dc2c2b6d8d8bc WatchSource:0}: Error finding container 625acf3e6124f0f31e0995e83cf81b8f1c38af4b598b220f0b6dc2c2b6d8d8bc: Status 404 returned error can't find the container with id 625acf3e6124f0f31e0995e83cf81b8f1c38af4b598b220f0b6dc2c2b6d8d8bc Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.552301 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dmhzm"] Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.558290 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:33 crc kubenswrapper[4747]: E1215 05:39:33.559403 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:34.059388742 +0000 UTC m=+137.755900660 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.561863 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-snk8n" event={"ID":"38505957-41ec-47b6-86a0-1b7c2a1c853e","Type":"ContainerStarted","Data":"09feb13a348bb78e1c5c348a11330aa976fa911f8413b1fddc29522019f284c7"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.561900 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-snk8n" event={"ID":"38505957-41ec-47b6-86a0-1b7c2a1c853e","Type":"ContainerStarted","Data":"9f75dcaa7ad3180fc85652b904208ff3c884aaffcff19bda1d8fe4a8396c1979"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.574201 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4txcp"] Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.578339 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2sdgk" event={"ID":"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f","Type":"ContainerStarted","Data":"9fe68dd80b9e33d6afbdccb3b8f3ff4ae50a93efc8283dcd2035d69533a62c33"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.578388 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2sdgk" 
event={"ID":"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f","Type":"ContainerStarted","Data":"d65f1f8d541e97790a8eb2f4b7f75e5e8159a795c1d56f5e6e2202b757c02f6b"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.617005 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czrvw"] Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.642458 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q24qk" event={"ID":"94faa019-bb1f-48da-a0e8-395e8a7d13b4","Type":"ContainerStarted","Data":"9bd63c1cb0eaecda831cc0e028dfac4ea85f4b6b97421e77064bcd9d786e298c"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.644500 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" event={"ID":"57009fe6-55f5-42e6-8389-64796b3784c3","Type":"ContainerStarted","Data":"c0e259823e426d6abf7a04e3ec4df6d19c7c87ec50485e1d78e2d72dd52c256a"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.650249 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-45bgn" event={"ID":"dea1e983-d109-4be5-b1e2-8de9d982dfb7","Type":"ContainerStarted","Data":"54ff58a9b822c5b6dd676a483979ca378200ce5cde2fd5e075f01ff5a5f58742"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.660090 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:33 crc kubenswrapper[4747]: E1215 05:39:33.660362 4747 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:34.160351414 +0000 UTC m=+137.856863331 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.669003 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rhcmp" event={"ID":"4dbb5b21-8479-4f73-b71d-2f2ab2a22b82","Type":"ContainerStarted","Data":"3fc4b2e529f1bb979b1046289af8ad38bd204c9a477722faf08e92e6413e9698"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.679448 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xlgbx"] Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.682101 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" event={"ID":"29341782-010b-4540-99a9-8cb20f667cef","Type":"ContainerStarted","Data":"8485f1ada66aa177011c2abd1ac18ddf644b7ee9aea882e7a9bdbde2b21282e3"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.694586 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" event={"ID":"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d","Type":"ContainerStarted","Data":"6d68ac697094cc0bd7f3afcb31943ad19bf97c99a320beb1bd13bf2d0f5ab69d"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.694621 4747 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" event={"ID":"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d","Type":"ContainerStarted","Data":"e7949581146ae66ff2adef154a63b9f3d81d8794aafb96baedfa4e832d4e4ba3"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.695265 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.703079 4747 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-bddlq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.703107 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" podUID="8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.706644 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5jmx6" event={"ID":"eb0ca9f3-9ee8-4299-adf4-5220bf190a0c","Type":"ContainerStarted","Data":"a8478a42a7ee35ceb57cc2ff2e06f7dc03f693263a0a7aab3df41af9c34b775a"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.709835 4747 patch_prober.go:28] interesting pod/downloads-7954f5f757-5jmx6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 
05:39:33.709894 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5jmx6" podUID="eb0ca9f3-9ee8-4299-adf4-5220bf190a0c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.718127 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-76d4n" event={"ID":"b3604b48-6e56-4470-aa4b-c0d1956b42d0","Type":"ContainerStarted","Data":"708bbade59ff97b052a6815e87c79ca8b9c9861435e810fd5085e9e193aa0844"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.722114 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-k72td" event={"ID":"3286e37f-50f9-4120-af33-d9e09be31e37","Type":"ContainerStarted","Data":"6c033dc8d0ce4904d97eade3c7fcc3b50c7d73c6b915f89f4df2057700f7e0e8"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.725295 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpvkm" event={"ID":"126d37e8-f81f-445a-bf48-49d228d42748","Type":"ContainerStarted","Data":"4c802bae805209c556fe8dc06a6e3777e49affb7edfdeef57e123aaa24b0df62"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.736408 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pkbhr" event={"ID":"fd2dfc21-3dfb-470f-8417-b7f3d1c8d75b","Type":"ContainerStarted","Data":"48802f61ea6da5189495c1639ec395ffb3504f372e9ec1cda1c62fbc0d9edaea"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.739882 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8phbj" 
event={"ID":"fe39e570-d08d-473e-a9d8-4aedffae0f04","Type":"ContainerStarted","Data":"99c72d9d8e210acb58c474b9cfd62bcb0a55f7f88a67ed9ce06088c0f9afb1a9"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.742418 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w46n4" event={"ID":"54c6f212-1947-47ff-a62a-dcc9b9559882","Type":"ContainerStarted","Data":"72b6d62047844281c76bfc944f1bce8124464da96dce78ee27dd8d970cba8c14"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.742515 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w46n4" event={"ID":"54c6f212-1947-47ff-a62a-dcc9b9559882","Type":"ContainerStarted","Data":"e5c239ac6203a95a701dcd8802787c983c5f59ed513fcfe2e076642797e5988e"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.752310 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-slnb9" event={"ID":"e4b0569a-a2ad-445f-ba73-bec2508e5c0b","Type":"ContainerStarted","Data":"84de83aa5d76d3daecf3aeadee59bd35569f6a6279ea108e474181c50ae9aef0"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.755740 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" event={"ID":"3aed04d0-4166-4ed3-bf2b-39e9598d0160","Type":"ContainerStarted","Data":"64076be02ac22b84a2d4558478908f21456f92a7df2110dd55d1ec6e9f0d1afb"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.756161 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.760490 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.760982 4747 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mvljn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.761038 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" podUID="3aed04d0-4166-4ed3-bf2b-39e9598d0160" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Dec 15 05:39:33 crc kubenswrapper[4747]: E1215 05:39:33.761672 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:34.261654015 +0000 UTC m=+137.958165933 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.763380 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xkbmm" event={"ID":"f139e81b-c534-4004-81b1-202a6b0e45f2","Type":"ContainerStarted","Data":"5cf828a0f89d42532ff089c74c7959236311840339fdad1bff146e293c724d56"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.768803 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mc527" event={"ID":"1335c7dc-dfe5-40d0-81b2-bc095c5a80c0","Type":"ContainerStarted","Data":"603bb08d6772e63bb5a69a3d15f312ccf284800d69f3a8362b890fb044673d19"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.772857 4747 generic.go:334] "Generic (PLEG): container finished" podID="5ec883a6-2265-4c56-97f1-98cd4a3aa084" containerID="1f329b1bdd26dd88bebb751783c6594121b0480eeeafc403af9f9f54086e0a56" exitCode=0 Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.772949 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" event={"ID":"5ec883a6-2265-4c56-97f1-98cd4a3aa084","Type":"ContainerDied","Data":"1f329b1bdd26dd88bebb751783c6594121b0480eeeafc403af9f9f54086e0a56"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.777019 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jx62d" 
event={"ID":"eba25f55-9f7e-43cc-a111-a5e4184c037e","Type":"ContainerStarted","Data":"598b58e23714e9a5ed580a445c8059915b89bac63c41a984995d09e258f2bad5"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.786964 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-q6zkl" event={"ID":"b7e78311-fd22-49fa-a423-9037fc15aaa5","Type":"ContainerStarted","Data":"d4c0db6e5ed77f9410c4a799c82ba0f185d87d56789e6f357f13a3e47d56f08c"} Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.796442 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:39:33 crc kubenswrapper[4747]: W1215 05:39:33.798248 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c8ebe95_b54a_4271_b6ad_a0d081bc93a7.slice/crio-051c44b50eb107b189052fc3f37ce5c62bdd8607957527ecb3d3a16dd48a8db2 WatchSource:0}: Error finding container 051c44b50eb107b189052fc3f37ce5c62bdd8607957527ecb3d3a16dd48a8db2: Status 404 returned error can't find the container with id 051c44b50eb107b189052fc3f37ce5c62bdd8607957527ecb3d3a16dd48a8db2 Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.856716 4747 patch_prober.go:28] interesting pod/router-default-5444994796-gvbxq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 15 05:39:33 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Dec 15 05:39:33 crc kubenswrapper[4747]: [+]process-running ok Dec 15 05:39:33 crc kubenswrapper[4747]: healthz check failed Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.856781 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvbxq" podUID="f9755e7f-72e0-4b8a-94c2-6702dec42d0b" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.863747 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:33 crc kubenswrapper[4747]: E1215 05:39:33.868875 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:34.368855335 +0000 UTC m=+138.065367253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.950671 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" podStartSLOduration=119.950654427 podStartE2EDuration="1m59.950654427s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:33.897414861 +0000 UTC m=+137.593926778" watchObservedRunningTime="2025-12-15 05:39:33.950654427 +0000 UTC m=+137.647166344" Dec 15 05:39:33 crc 
kubenswrapper[4747]: I1215 05:39:33.966425 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:33 crc kubenswrapper[4747]: E1215 05:39:33.968115 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:34.468087954 +0000 UTC m=+138.164599871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.983951 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-2sdgk" podStartSLOduration=119.983910523 podStartE2EDuration="1m59.983910523s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:33.950208819 +0000 UTC m=+137.646720736" watchObservedRunningTime="2025-12-15 05:39:33.983910523 +0000 UTC m=+137.680422429" Dec 15 05:39:33 crc kubenswrapper[4747]: I1215 05:39:33.985164 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-server-rhcmp" podStartSLOduration=4.9851549330000005 podStartE2EDuration="4.985154933s" podCreationTimestamp="2025-12-15 05:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:33.977319139 +0000 UTC m=+137.673831047" watchObservedRunningTime="2025-12-15 05:39:33.985154933 +0000 UTC m=+137.681666849" Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.026157 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-76d4n" podStartSLOduration=120.026137244 podStartE2EDuration="2m0.026137244s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:34.024287426 +0000 UTC m=+137.720799333" watchObservedRunningTime="2025-12-15 05:39:34.026137244 +0000 UTC m=+137.722649161" Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.068880 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:34 crc kubenswrapper[4747]: E1215 05:39:34.069356 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:34.569343807 +0000 UTC m=+138.265855724 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.069376 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" podStartSLOduration=120.069351151 podStartE2EDuration="2m0.069351151s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:34.068124134 +0000 UTC m=+137.764636051" watchObservedRunningTime="2025-12-15 05:39:34.069351151 +0000 UTC m=+137.765863067" Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.093987 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" podStartSLOduration=120.09397179 podStartE2EDuration="2m0.09397179s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:34.091839913 +0000 UTC m=+137.788351829" watchObservedRunningTime="2025-12-15 05:39:34.09397179 +0000 UTC m=+137.790483708" Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.148997 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" podStartSLOduration=120.148978609 podStartE2EDuration="2m0.148978609s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:34.143709453 +0000 UTC m=+137.840221370" watchObservedRunningTime="2025-12-15 05:39:34.148978609 +0000 UTC m=+137.845490526" Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.173775 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:34 crc kubenswrapper[4747]: E1215 05:39:34.174291 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:34.674274739 +0000 UTC m=+138.370786646 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.222398 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-g46rv" podStartSLOduration=120.22237856 podStartE2EDuration="2m0.22237856s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:34.18904051 +0000 UTC m=+137.885552428" watchObservedRunningTime="2025-12-15 05:39:34.22237856 +0000 UTC m=+137.918890477" Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.235114 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-slnb9" podStartSLOduration=120.235098816 podStartE2EDuration="2m0.235098816s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:34.221873661 +0000 UTC m=+137.918385578" watchObservedRunningTime="2025-12-15 05:39:34.235098816 +0000 UTC m=+137.931610732" Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.275774 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: 
\"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:34 crc kubenswrapper[4747]: E1215 05:39:34.276227 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:34.776214558 +0000 UTC m=+138.472726474 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.295293 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-gvbxq" podStartSLOduration=120.295272549 podStartE2EDuration="2m0.295272549s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:34.273867776 +0000 UTC m=+137.970379693" watchObservedRunningTime="2025-12-15 05:39:34.295272549 +0000 UTC m=+137.991784465" Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.313294 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.348460 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rcnrf" podStartSLOduration=120.348438477 podStartE2EDuration="2m0.348438477s" 
podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:34.348156446 +0000 UTC m=+138.044668363" watchObservedRunningTime="2025-12-15 05:39:34.348438477 +0000 UTC m=+138.044950395" Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.378599 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:34 crc kubenswrapper[4747]: E1215 05:39:34.378895 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:34.878873478 +0000 UTC m=+138.575385395 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.379101 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:34 crc kubenswrapper[4747]: E1215 05:39:34.379370 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:34.879361326 +0000 UTC m=+138.575873242 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.414015 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-5jmx6" podStartSLOduration=120.413995471 podStartE2EDuration="2m0.413995471s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:34.378060379 +0000 UTC m=+138.074572306" watchObservedRunningTime="2025-12-15 05:39:34.413995471 +0000 UTC m=+138.110507389" Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.475918 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-v472l" podStartSLOduration=120.475897105 podStartE2EDuration="2m0.475897105s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:34.413512735 +0000 UTC m=+138.110024651" watchObservedRunningTime="2025-12-15 05:39:34.475897105 +0000 UTC m=+138.172409021" Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.480449 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:34 crc kubenswrapper[4747]: E1215 05:39:34.480588 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:34.980568386 +0000 UTC m=+138.677080303 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.480823 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:34 crc kubenswrapper[4747]: E1215 05:39:34.481443 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:34.981433002 +0000 UTC m=+138.677944919 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.583219 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:34 crc kubenswrapper[4747]: E1215 05:39:34.596070 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:35.096034947 +0000 UTC m=+138.792546864 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.691752 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:34 crc kubenswrapper[4747]: E1215 05:39:34.692225 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:35.192208364 +0000 UTC m=+138.888720282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.797546 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:34 crc kubenswrapper[4747]: E1215 05:39:34.798023 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:35.298006937 +0000 UTC m=+138.994518855 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.798152 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:34 crc kubenswrapper[4747]: E1215 05:39:34.798466 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:35.298459168 +0000 UTC m=+138.994971085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.841087 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jx62d" event={"ID":"eba25f55-9f7e-43cc-a111-a5e4184c037e","Type":"ContainerStarted","Data":"c8b327a84173d0f6c7c3967eb9869f8ee81d666ba829681b1fc1bdbf0408d011"} Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.849150 4747 patch_prober.go:28] interesting pod/router-default-5444994796-gvbxq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 15 05:39:34 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Dec 15 05:39:34 crc kubenswrapper[4747]: [+]process-running ok Dec 15 05:39:34 crc kubenswrapper[4747]: healthz check failed Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.849231 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvbxq" podUID="f9755e7f-72e0-4b8a-94c2-6702dec42d0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.850120 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnf5v" event={"ID":"7e22aff6-5dc0-454e-b980-d39cfcd08ba6","Type":"ContainerStarted","Data":"2ad37ff987c577e2a1c4c4c4ea5303845448d6319eea34f0126e46a07e044316"} Dec 15 05:39:34 crc 
kubenswrapper[4747]: I1215 05:39:34.850175 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnf5v" event={"ID":"7e22aff6-5dc0-454e-b980-d39cfcd08ba6","Type":"ContainerStarted","Data":"248728c7f822e707dbfb41ed192c3c3d4a19b16df7b10642fa78378ccd00285f"} Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.888134 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rhcmp" event={"ID":"4dbb5b21-8479-4f73-b71d-2f2ab2a22b82","Type":"ContainerStarted","Data":"d3fd89d3903c94ac52fa3cf014f218de6b830bf2ed66b85d01ffef82dd6df8e2"} Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.899210 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:34 crc kubenswrapper[4747]: E1215 05:39:34.899389 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:35.399362697 +0000 UTC m=+139.095874615 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.899484 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:34 crc kubenswrapper[4747]: E1215 05:39:34.899818 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:35.399802214 +0000 UTC m=+139.096314131 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.930807 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" event={"ID":"3aed04d0-4166-4ed3-bf2b-39e9598d0160","Type":"ContainerStarted","Data":"bc0558e63876a6f899739254fec5f7486396a430a3468c432efc2e1e673235bd"} Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.932177 4747 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mvljn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.932222 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" podUID="3aed04d0-4166-4ed3-bf2b-39e9598d0160" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.951892 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mc527" event={"ID":"1335c7dc-dfe5-40d0-81b2-bc095c5a80c0","Type":"ContainerStarted","Data":"21ea3110921d2a607d61609da147572c89a7ddbf980bfd9b889993235e5e5902"} Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.954866 4747 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mc527" Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.970281 4747 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-mc527 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.970324 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mc527" podUID="1335c7dc-dfe5-40d0-81b2-bc095c5a80c0" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.970943 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8" event={"ID":"ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62","Type":"ContainerStarted","Data":"39ca4119249466f267e77a7b4e3a28afc05d59998b059bf159bd96dbaeb55362"} Dec 15 05:39:34 crc kubenswrapper[4747]: I1215 05:39:34.989608 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzkjc" event={"ID":"7a6b61cd-3536-4763-8fe5-0a49f5a360b5","Type":"ContainerStarted","Data":"1e8827a20b7829e7fe89c56488edf038b99f1bc22b6ac8dcf8753c859956b4ca"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.002415 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 
15 05:39:35 crc kubenswrapper[4747]: E1215 05:39:35.003434 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:35.503417142 +0000 UTC m=+139.199929059 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.016236 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8" podStartSLOduration=121.016220294 podStartE2EDuration="2m1.016220294s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:35.015220995 +0000 UTC m=+138.711732912" watchObservedRunningTime="2025-12-15 05:39:35.016220294 +0000 UTC m=+138.712732211" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.016432 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mc527" podStartSLOduration=121.016426682 podStartE2EDuration="2m1.016426682s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:34.983255476 +0000 UTC m=+138.679767392" 
watchObservedRunningTime="2025-12-15 05:39:35.016426682 +0000 UTC m=+138.712938598" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.030211 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-wzmz8" event={"ID":"29341782-010b-4540-99a9-8cb20f667cef","Type":"ContainerStarted","Data":"6820afc584603d23fbf04c4bc4158e893a87db87e99b02223884a68edd3316eb"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.052836 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dmhzm" event={"ID":"43660579-30f6-416b-b60a-db19d0f244f8","Type":"ContainerStarted","Data":"49009d1a0ea5eaca9339ab3e1d29d79c914f601f746ad22ea311cc8a97f5091c"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.067825 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czrvw" event={"ID":"9c8ebe95-b54a-4271-b6ad-a0d081bc93a7","Type":"ContainerStarted","Data":"051c44b50eb107b189052fc3f37ce5c62bdd8607957527ecb3d3a16dd48a8db2"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.069637 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pkbhr" event={"ID":"fd2dfc21-3dfb-470f-8417-b7f3d1c8d75b","Type":"ContainerStarted","Data":"09b55df7f4963ca7bb57dcce7faf2ac53c9cccd176028a399cd6f14d39ea9c50"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.080869 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-clqdc" event={"ID":"9cdc82fd-2b58-4bbf-8d67-5e66cf80ebb8","Type":"ContainerStarted","Data":"90f1e7e00d15a58a35b407fbabb0ecb837ce31fbbe880f08cab1fbb3a57eaaa4"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.095599 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kzkjc" podStartSLOduration=121.095577866 podStartE2EDuration="2m1.095577866s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:35.041329832 +0000 UTC m=+138.737841750" watchObservedRunningTime="2025-12-15 05:39:35.095577866 +0000 UTC m=+138.792089783" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.100178 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8phbj" event={"ID":"fe39e570-d08d-473e-a9d8-4aedffae0f04","Type":"ContainerStarted","Data":"66df8c24ec4809fc98338863815c446f404b57f285587266b2a984b2d7f97358"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.106707 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:35 crc kubenswrapper[4747]: E1215 05:39:35.108110 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:35.60809602 +0000 UTC m=+139.304607937 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.114527 4747 generic.go:334] "Generic (PLEG): container finished" podID="57009fe6-55f5-42e6-8389-64796b3784c3" containerID="9b61c6b52f4e201237612a3654883df044ca7c434a97f3df9132777431069d2b" exitCode=0 Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.114874 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" event={"ID":"57009fe6-55f5-42e6-8389-64796b3784c3","Type":"ContainerDied","Data":"9b61c6b52f4e201237612a3654883df044ca7c434a97f3df9132777431069d2b"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.137952 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-clqdc" podStartSLOduration=121.137936384 podStartE2EDuration="2m1.137936384s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:35.136212392 +0000 UTC m=+138.832724310" watchObservedRunningTime="2025-12-15 05:39:35.137936384 +0000 UTC m=+138.834448301" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.138521 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dmhzm" podStartSLOduration=121.138515072 podStartE2EDuration="2m1.138515072s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:35.094291848 +0000 UTC m=+138.790803765" watchObservedRunningTime="2025-12-15 05:39:35.138515072 +0000 UTC m=+138.835026990" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.144089 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-75dh6" event={"ID":"feafc60a-2dff-433e-ad58-01dcc0f23974","Type":"ContainerStarted","Data":"e8d4a8f48a65b31cc4f826181d1690547c562dd4302486746478320c46812a9b"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.144134 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-75dh6" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.152005 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mxdtl" event={"ID":"9560d6f0-3fc0-483c-a3a7-87e022468221","Type":"ContainerStarted","Data":"e71eef6ca38f194721db659f9f46379a5fcdb721f3706e6173c5fcfb7b700024"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.152047 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mxdtl" event={"ID":"9560d6f0-3fc0-483c-a3a7-87e022468221","Type":"ContainerStarted","Data":"9bf803912dd6adc4fec1d07d09b9077ac9d15597fef156d13501096d3840e810"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.171651 4747 patch_prober.go:28] interesting pod/console-operator-58897d9998-75dh6 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.171682 4747 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console-operator/console-operator-58897d9998-75dh6" podUID="feafc60a-2dff-433e-ad58-01dcc0f23974" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.186949 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpvkm" event={"ID":"126d37e8-f81f-445a-bf48-49d228d42748","Type":"ContainerStarted","Data":"a97a57fd6e064a7c79d6f2d117c5fdb8c4a720fdae7d7d82c492ee55a73f74f9"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.207941 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:35 crc kubenswrapper[4747]: E1215 05:39:35.209153 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:35.709136449 +0000 UTC m=+139.405648366 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.213860 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mxdtl" podStartSLOduration=121.213845011 podStartE2EDuration="2m1.213845011s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:35.212446592 +0000 UTC m=+138.908958509" watchObservedRunningTime="2025-12-15 05:39:35.213845011 +0000 UTC m=+138.910356927" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.227381 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-76d4n" event={"ID":"b3604b48-6e56-4470-aa4b-c0d1956b42d0","Type":"ContainerStarted","Data":"f272e866bb18384883bb2c16e1922ce449404c5b302838b3f5a44cb96205012a"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.253370 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4txcp" event={"ID":"4bbed563-3f20-42a1-949b-d5490500299b","Type":"ContainerStarted","Data":"8c4f17f0d00c70042993aeb2e704c3e32b3e8d539a5ad5e2ddbb1c8206088e8e"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.260526 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-75dh6" podStartSLOduration=121.260510758 
podStartE2EDuration="2m1.260510758s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:35.259242854 +0000 UTC m=+138.955754772" watchObservedRunningTime="2025-12-15 05:39:35.260510758 +0000 UTC m=+138.957022676" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.306489 4747 generic.go:334] "Generic (PLEG): container finished" podID="9c3f3cf2-3751-4315-bcf9-f42a5650c32b" containerID="558bd7eafae30374af9294c23ba65dd151b859315a02cbd942be34b11e913e89" exitCode=0 Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.307403 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gr6p7" event={"ID":"9c3f3cf2-3751-4315-bcf9-f42a5650c32b","Type":"ContainerDied","Data":"558bd7eafae30374af9294c23ba65dd151b859315a02cbd942be34b11e913e89"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.310834 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:35 crc kubenswrapper[4747]: E1215 05:39:35.312182 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:35.812170254 +0000 UTC m=+139.508682171 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.336891 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd" event={"ID":"5c0f0f4c-b174-4082-a173-60b46a8d83fc","Type":"ContainerStarted","Data":"625acf3e6124f0f31e0995e83cf81b8f1c38af4b598b220f0b6dc2c2b6d8d8bc"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.337549 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.338964 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpvkm" podStartSLOduration=121.338920127 podStartE2EDuration="2m1.338920127s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:35.338026747 +0000 UTC m=+139.034538664" watchObservedRunningTime="2025-12-15 05:39:35.338920127 +0000 UTC m=+139.035432043" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.339066 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4txcp" podStartSLOduration=6.339061983 podStartE2EDuration="6.339061983s" podCreationTimestamp="2025-12-15 05:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:35.290804874 +0000 UTC m=+138.987316791" watchObservedRunningTime="2025-12-15 05:39:35.339061983 +0000 UTC m=+139.035573900" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.375040 4747 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-t44jd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" start-of-body= Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.375118 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd" podUID="5c0f0f4c-b174-4082-a173-60b46a8d83fc" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.405282 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-snk8n" event={"ID":"38505957-41ec-47b6-86a0-1b7c2a1c853e","Type":"ContainerStarted","Data":"77cc12e16031e1b037666e1b365dfefa77879eeb350cbdaee20186dae331190b"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.417496 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:35 crc kubenswrapper[4747]: E1215 05:39:35.417769 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-15 05:39:35.917742291 +0000 UTC m=+139.614254209 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.417910 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:35 crc kubenswrapper[4747]: E1215 05:39:35.418524 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:35.918507069 +0000 UTC m=+139.615018987 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.427535 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-45bgn" event={"ID":"dea1e983-d109-4be5-b1e2-8de9d982dfb7","Type":"ContainerStarted","Data":"ef5189cecae87c6ecaa8054e7d6246225d59eb4e5dc6f0a0cdad8066ec4eec66"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.446938 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd" podStartSLOduration=121.446909088 podStartE2EDuration="2m1.446909088s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:35.444088354 +0000 UTC m=+139.140600272" watchObservedRunningTime="2025-12-15 05:39:35.446909088 +0000 UTC m=+139.143421005" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.461047 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-q6zkl" event={"ID":"b7e78311-fd22-49fa-a423-9037fc15aaa5","Type":"ContainerStarted","Data":"3220c83611e7552da59186a0470d7a675d8a064ec0613f5256a74fb6ca3b1202"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.484503 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-snk8n" podStartSLOduration=121.484486147 
podStartE2EDuration="2m1.484486147s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:35.464205547 +0000 UTC m=+139.160717464" watchObservedRunningTime="2025-12-15 05:39:35.484486147 +0000 UTC m=+139.180998074" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.490219 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7g65v" event={"ID":"4ac6e673-f966-4177-84a1-440b3989f4ab","Type":"ContainerStarted","Data":"1f47cae2b18ca225c7aa0702b0a614a2bc415f937db454f7b1cb6bd7f0a50a4c"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.509313 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w46n4" event={"ID":"54c6f212-1947-47ff-a62a-dcc9b9559882","Type":"ContainerStarted","Data":"1a072275938c3e38918c6f6d22775629a4469b986c2ac3b369ff76ffc30954af"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.511472 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-45bgn" podStartSLOduration=121.511462075 podStartE2EDuration="2m1.511462075s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:35.484302121 +0000 UTC m=+139.180814039" watchObservedRunningTime="2025-12-15 05:39:35.511462075 +0000 UTC m=+139.207973992" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.520588 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:35 crc kubenswrapper[4747]: E1215 05:39:35.521783 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:36.02176162 +0000 UTC m=+139.718273537 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.548550 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7g65v" podStartSLOduration=121.548530447 podStartE2EDuration="2m1.548530447s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:35.547814482 +0000 UTC m=+139.244326399" watchObservedRunningTime="2025-12-15 05:39:35.548530447 +0000 UTC m=+139.245042365" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.549086 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-q6zkl" podStartSLOduration=121.549081254 podStartE2EDuration="2m1.549081254s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-12-15 05:39:35.512217516 +0000 UTC m=+139.208729433" watchObservedRunningTime="2025-12-15 05:39:35.549081254 +0000 UTC m=+139.245593170" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.558117 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpvdj" event={"ID":"fdff8a05-dbcd-4bb1-9b57-ec2c9bf02d0e","Type":"ContainerStarted","Data":"23295019c9d40106f47c2fdd780f23e4ffcfbdb4991bd59485e22d47334d06af"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.558727 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpvdj" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.603541 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xlgbx" event={"ID":"07bf5f61-69f9-4b0b-9f0d-f7c8e1b8379b","Type":"ContainerStarted","Data":"a965224ccf2e63d6f7296b9a5fd716c92fb28f011b7932eeca62dd77153d1540"} Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.608224 4747 patch_prober.go:28] interesting pod/downloads-7954f5f757-5jmx6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.608259 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5jmx6" podUID="eb0ca9f3-9ee8-4299-adf4-5220bf190a0c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.610792 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w46n4" 
podStartSLOduration=121.610780966 podStartE2EDuration="2m1.610780966s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:35.608961395 +0000 UTC m=+139.305473312" watchObservedRunningTime="2025-12-15 05:39:35.610780966 +0000 UTC m=+139.307292874" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.621716 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:35 crc kubenswrapper[4747]: E1215 05:39:35.623060 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:36.123048651 +0000 UTC m=+139.819560568 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.626682 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.627281 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.642396 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xlgbx" podStartSLOduration=121.642381538 podStartE2EDuration="2m1.642381538s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:35.641255011 +0000 UTC m=+139.337766928" watchObservedRunningTime="2025-12-15 05:39:35.642381538 +0000 UTC m=+139.338893455" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.679818 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpvdj" podStartSLOduration=121.679793638 podStartE2EDuration="2m1.679793638s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:35.678945964 +0000 UTC 
m=+139.375457881" watchObservedRunningTime="2025-12-15 05:39:35.679793638 +0000 UTC m=+139.376305555" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.723991 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:35 crc kubenswrapper[4747]: E1215 05:39:35.725261 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:36.225213804 +0000 UTC m=+139.921725720 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.825890 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:35 crc kubenswrapper[4747]: E1215 05:39:35.826206 4747 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:36.326196051 +0000 UTC m=+140.022707968 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.853261 4747 patch_prober.go:28] interesting pod/router-default-5444994796-gvbxq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 15 05:39:35 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Dec 15 05:39:35 crc kubenswrapper[4747]: [+]process-running ok Dec 15 05:39:35 crc kubenswrapper[4747]: healthz check failed Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.853303 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvbxq" podUID="f9755e7f-72e0-4b8a-94c2-6702dec42d0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 15 05:39:35 crc kubenswrapper[4747]: I1215 05:39:35.926254 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:35 crc kubenswrapper[4747]: E1215 05:39:35.926588 4747 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:36.426576238 +0000 UTC m=+140.123088155 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.028164 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:36 crc kubenswrapper[4747]: E1215 05:39:36.028583 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:36.528545081 +0000 UTC m=+140.225056998 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.128848 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:36 crc kubenswrapper[4747]: E1215 05:39:36.129250 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:36.629235411 +0000 UTC m=+140.325747327 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.230823 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:36 crc kubenswrapper[4747]: E1215 05:39:36.231158 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:36.731141305 +0000 UTC m=+140.427653223 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.334374 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:36 crc kubenswrapper[4747]: E1215 05:39:36.334507 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:36.834489412 +0000 UTC m=+140.531001329 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.334952 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:36 crc kubenswrapper[4747]: E1215 05:39:36.335479 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:36.835458553 +0000 UTC m=+140.531970471 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.399546 4747 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.436457 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:36 crc kubenswrapper[4747]: E1215 05:39:36.436627 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:36.936605291 +0000 UTC m=+140.633117208 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.436964 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:36 crc kubenswrapper[4747]: E1215 05:39:36.437308 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:36.937298915 +0000 UTC m=+140.633810832 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.538530 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:36 crc kubenswrapper[4747]: E1215 05:39:36.538703 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:37.038671176 +0000 UTC m=+140.735183093 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.538745 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:36 crc kubenswrapper[4747]: E1215 05:39:36.539064 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:37.039054948 +0000 UTC m=+140.735566865 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.608022 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tpvkm" event={"ID":"126d37e8-f81f-445a-bf48-49d228d42748","Type":"ContainerStarted","Data":"265f033346862ea89c78852a0fbe98a74e1f0c57e3bea16778d4b2ff869669cd"} Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.609952 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd" event={"ID":"5c0f0f4c-b174-4082-a173-60b46a8d83fc","Type":"ContainerStarted","Data":"0079032c9f79e29d12940092135679b1203c175788a7c393993c9f4a6bbea42d"} Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.611605 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xkbmm" event={"ID":"f139e81b-c534-4004-81b1-202a6b0e45f2","Type":"ContainerStarted","Data":"28cf5a3719d0df1ee0d8d1ff2325636891cfe34fe92310d55793eb50756781ec"} Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.613424 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mxdtl" event={"ID":"9560d6f0-3fc0-483c-a3a7-87e022468221","Type":"ContainerStarted","Data":"65957c672b277b9f893988081b7f4c03bdfc364e5b81e4060ecd3f9976a14aaf"} Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.615070 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/dns-default-k72td" event={"ID":"3286e37f-50f9-4120-af33-d9e09be31e37","Type":"ContainerStarted","Data":"a5a86596c82c1657dda8d80f4eb257f66591df252565d85c0bf5b34969c8a574"} Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.615109 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-k72td" event={"ID":"3286e37f-50f9-4120-af33-d9e09be31e37","Type":"ContainerStarted","Data":"fb9a0c12a52d1e28b39216b08a7c2fa2b6b4a577c5b77a7847430a914eb9deed"} Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.615439 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-k72td" Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.617379 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" event={"ID":"57009fe6-55f5-42e6-8389-64796b3784c3","Type":"ContainerStarted","Data":"1c51f44b7275e98658093c6c3a1bcc553ab2b5013e4421fb09f86201042aa61c"} Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.619918 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dmhzm" event={"ID":"43660579-30f6-416b-b60a-db19d0f244f8","Type":"ContainerStarted","Data":"88c8e282d995d4411776fd401a0a0db0195da82b956ee0b4880862b1387c9fb7"} Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.621023 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q24qk" event={"ID":"94faa019-bb1f-48da-a0e8-395e8a7d13b4","Type":"ContainerStarted","Data":"5de6203d86536e7ea5fe10371c92e7b0a19ed87107f27a51beb7637fd161a22e"} Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.621682 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q24qk" Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.622774 4747 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czrvw" event={"ID":"9c8ebe95-b54a-4271-b6ad-a0d081bc93a7","Type":"ContainerStarted","Data":"ef2fb56043ee82e58c49a25b04fe826d0131a4eb34545f8fb2cbdc346e2cba4d"} Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.625183 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" event={"ID":"5ec883a6-2265-4c56-97f1-98cd4a3aa084","Type":"ContainerStarted","Data":"c0d9d512669fcdd370de56e81973a7d98e8e7bae22d5b137e5b20a3b676ed5a2"} Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.625208 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" event={"ID":"5ec883a6-2265-4c56-97f1-98cd4a3aa084","Type":"ContainerStarted","Data":"48ec0c59191f58a187695f77981c83e9056d89394840c3d2abe326417f32bb24"} Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.635277 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4txcp" event={"ID":"4bbed563-3f20-42a1-949b-d5490500299b","Type":"ContainerStarted","Data":"901382635a004d6bb547e025181533acc3058646f80c64a9cc0c54452911bf31"} Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.635325 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pkbhr" event={"ID":"fd2dfc21-3dfb-470f-8417-b7f3d1c8d75b","Type":"ContainerStarted","Data":"bcadb62ae39c7c4f7f31d0c363c2102de97c8f0bc904867b3b61adff4eecca0b"} Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.637449 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnf5v" event={"ID":"7e22aff6-5dc0-454e-b980-d39cfcd08ba6","Type":"ContainerStarted","Data":"d0285951ea886ef723b36a8b7dd5d7444394908d0a937117a5f7b381f9b2c8fc"} Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.637566 4747 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q24qk" Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.638966 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gr6p7" event={"ID":"9c3f3cf2-3751-4315-bcf9-f42a5650c32b","Type":"ContainerStarted","Data":"60d31bd557f3a7a87e09dcca15190b4767345b7ac099371b8072b7820440de67"} Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.639562 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gr6p7" Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.639893 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:36 crc kubenswrapper[4747]: E1215 05:39:36.640386 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-15 05:39:37.140372928 +0000 UTC m=+140.836884845 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.645863 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jx62d" event={"ID":"eba25f55-9f7e-43cc-a111-a5e4184c037e","Type":"ContainerStarted","Data":"6dec66acb590e5af04c308ebacf6ef3bca1f285e7a9d7f8d6d79fe02680cca56"} Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.646763 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xlgbx" event={"ID":"07bf5f61-69f9-4b0b-9f0d-f7c8e1b8379b","Type":"ContainerStarted","Data":"35a193f2e86307fb8fab2d82ac5d2455f2351665ceef7b3b1e0d926f969d43b7"} Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.647021 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xkbmm" podStartSLOduration=122.647010587 podStartE2EDuration="2m2.647010587s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:36.645392234 +0000 UTC m=+140.341904172" watchObservedRunningTime="2025-12-15 05:39:36.647010587 +0000 UTC m=+140.343522504" Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.648103 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8phbj" 
event={"ID":"fe39e570-d08d-473e-a9d8-4aedffae0f04","Type":"ContainerStarted","Data":"8a0bb647e965fd6261292a33205e4101fbfad26c173dd916589d843968846037"} Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.648130 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8phbj" event={"ID":"fe39e570-d08d-473e-a9d8-4aedffae0f04","Type":"ContainerStarted","Data":"93e001ba59b472ccad6821a1f273f042d7e114cd46703732e5e10457d2188f8e"} Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.672152 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.682873 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mc527" Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.695172 4747 patch_prober.go:28] interesting pod/console-operator-58897d9998-75dh6 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 15 05:39:36 crc kubenswrapper[4747]: [+]log ok Dec 15 05:39:36 crc kubenswrapper[4747]: [-]poststarthook/max-in-flight-filter failed: reason withheld Dec 15 05:39:36 crc kubenswrapper[4747]: [-]poststarthook/storage-object-count-tracker-hook failed: reason withheld Dec 15 05:39:36 crc kubenswrapper[4747]: [+]shutdown ok Dec 15 05:39:36 crc kubenswrapper[4747]: readyz check failed Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.695232 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-75dh6" podUID="feafc60a-2dff-433e-ad58-01dcc0f23974" containerName="console-operator" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.697602 4747 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gr6p7" podStartSLOduration=122.697584462 podStartE2EDuration="2m2.697584462s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:36.695769159 +0000 UTC m=+140.392281066" watchObservedRunningTime="2025-12-15 05:39:36.697584462 +0000 UTC m=+140.394096380" Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.725252 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t44jd" Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.741074 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:36 crc kubenswrapper[4747]: E1215 05:39:36.746774 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-15 05:39:37.246759918 +0000 UTC m=+140.943271835 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lzg4l" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.763024 4747 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-15T05:39:36.399572906Z","Handler":null,"Name":""} Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.767564 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czrvw" podStartSLOduration=122.767551227 podStartE2EDuration="2m2.767551227s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:36.731745979 +0000 UTC m=+140.428257896" watchObservedRunningTime="2025-12-15 05:39:36.767551227 +0000 UTC m=+140.464063144" Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.768379 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" podStartSLOduration=122.768369996 podStartE2EDuration="2m2.768369996s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:36.766800226 +0000 UTC m=+140.463312142" watchObservedRunningTime="2025-12-15 05:39:36.768369996 +0000 UTC m=+140.464881914" Dec 15 05:39:36 crc 
kubenswrapper[4747]: I1215 05:39:36.811002 4747 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.811036 4747 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.834971 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-pkbhr" podStartSLOduration=122.834951858 podStartE2EDuration="2m2.834951858s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:36.832091741 +0000 UTC m=+140.528603658" watchObservedRunningTime="2025-12-15 05:39:36.834951858 +0000 UTC m=+140.531463775" Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.842573 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.847046 4747 patch_prober.go:28] interesting pod/router-default-5444994796-gvbxq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 15 05:39:36 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Dec 15 05:39:36 crc kubenswrapper[4747]: [+]process-running ok Dec 15 05:39:36 crc kubenswrapper[4747]: healthz check failed Dec 15 05:39:36 crc 
kubenswrapper[4747]: I1215 05:39:36.847103 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvbxq" podUID="f9755e7f-72e0-4b8a-94c2-6702dec42d0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.894495 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.919477 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-k72td" podStartSLOduration=7.919462568 podStartE2EDuration="7.919462568s" podCreationTimestamp="2025-12-15 05:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:36.861202102 +0000 UTC m=+140.557714009" watchObservedRunningTime="2025-12-15 05:39:36.919462568 +0000 UTC m=+140.615974485" Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.944833 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.959511 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jnf5v" podStartSLOduration=122.95949792 podStartE2EDuration="2m2.95949792s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:36.958461491 +0000 UTC m=+140.654973407" watchObservedRunningTime="2025-12-15 05:39:36.95949792 +0000 UTC m=+140.656009836" Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.959827 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q24qk" podStartSLOduration=122.959821858 podStartE2EDuration="2m2.959821858s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:36.93646241 +0000 UTC m=+140.632974327" watchObservedRunningTime="2025-12-15 05:39:36.959821858 +0000 UTC m=+140.656333775" Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.963876 4747 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.963937 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.982232 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" podStartSLOduration=122.98221514 podStartE2EDuration="2m2.98221514s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:36.980264333 +0000 UTC m=+140.676776250" watchObservedRunningTime="2025-12-15 05:39:36.98221514 +0000 UTC m=+140.678727057" Dec 15 05:39:36 crc kubenswrapper[4747]: I1215 05:39:36.998509 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-jx62d" podStartSLOduration=122.998495529 podStartE2EDuration="2m2.998495529s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:36.998226924 +0000 UTC m=+140.694738842" watchObservedRunningTime="2025-12-15 05:39:36.998495529 +0000 UTC m=+140.695007447" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.071934 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 
15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.071981 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.153702 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lzg4l\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.216660 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.397038 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f66lm"] Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.403954 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f66lm" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.418891 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.421355 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f66lm"] Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.451312 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.560425 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6zrdj"] Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.561276 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6zrdj" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.566561 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.571419 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbrtg\" (UniqueName: \"kubernetes.io/projected/9a524d92-a1c1-4494-b487-ba0df0e6a1ec-kube-api-access-lbrtg\") pod \"certified-operators-f66lm\" (UID: \"9a524d92-a1c1-4494-b487-ba0df0e6a1ec\") " pod="openshift-marketplace/certified-operators-f66lm" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.571470 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a524d92-a1c1-4494-b487-ba0df0e6a1ec-catalog-content\") pod \"certified-operators-f66lm\" (UID: \"9a524d92-a1c1-4494-b487-ba0df0e6a1ec\") " 
pod="openshift-marketplace/certified-operators-f66lm" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.571579 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a524d92-a1c1-4494-b487-ba0df0e6a1ec-utilities\") pod \"certified-operators-f66lm\" (UID: \"9a524d92-a1c1-4494-b487-ba0df0e6a1ec\") " pod="openshift-marketplace/certified-operators-f66lm" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.586284 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6zrdj"] Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.656914 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lzg4l"] Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.657589 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8phbj" event={"ID":"fe39e570-d08d-473e-a9d8-4aedffae0f04","Type":"ContainerStarted","Data":"3ff7dc6065c517997621b92e278fe82edcbc6ca7538f66b68df5b5d7d6b493de"} Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.659770 4747 generic.go:334] "Generic (PLEG): container finished" podID="ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62" containerID="39ca4119249466f267e77a7b4e3a28afc05d59998b059bf159bd96dbaeb55362" exitCode=0 Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.659987 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8" event={"ID":"ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62","Type":"ContainerDied","Data":"39ca4119249466f267e77a7b4e3a28afc05d59998b059bf159bd96dbaeb55362"} Dec 15 05:39:37 crc kubenswrapper[4747]: W1215 05:39:37.661079 4747 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb7a7a97_4354_4b54_afbc_e47fb8751316.slice/crio-d73ac43fa234248b2fcbc1bce861d8ca58ace7d5437852eec4e192990d8f7b90 WatchSource:0}: Error finding container d73ac43fa234248b2fcbc1bce861d8ca58ace7d5437852eec4e192990d8f7b90: Status 404 returned error can't find the container with id d73ac43fa234248b2fcbc1bce861d8ca58ace7d5437852eec4e192990d8f7b90 Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.676346 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6zbfg" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.677158 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a524d92-a1c1-4494-b487-ba0df0e6a1ec-utilities\") pod \"certified-operators-f66lm\" (UID: \"9a524d92-a1c1-4494-b487-ba0df0e6a1ec\") " pod="openshift-marketplace/certified-operators-f66lm" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.677203 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbrtg\" (UniqueName: \"kubernetes.io/projected/9a524d92-a1c1-4494-b487-ba0df0e6a1ec-kube-api-access-lbrtg\") pod \"certified-operators-f66lm\" (UID: \"9a524d92-a1c1-4494-b487-ba0df0e6a1ec\") " pod="openshift-marketplace/certified-operators-f66lm" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.677227 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a524d92-a1c1-4494-b487-ba0df0e6a1ec-catalog-content\") pod \"certified-operators-f66lm\" (UID: \"9a524d92-a1c1-4494-b487-ba0df0e6a1ec\") " pod="openshift-marketplace/certified-operators-f66lm" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.677269 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww2s9\" (UniqueName: 
\"kubernetes.io/projected/68eca474-5187-41ca-b67f-cb316a4ab410-kube-api-access-ww2s9\") pod \"community-operators-6zrdj\" (UID: \"68eca474-5187-41ca-b67f-cb316a4ab410\") " pod="openshift-marketplace/community-operators-6zrdj" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.677328 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68eca474-5187-41ca-b67f-cb316a4ab410-utilities\") pod \"community-operators-6zrdj\" (UID: \"68eca474-5187-41ca-b67f-cb316a4ab410\") " pod="openshift-marketplace/community-operators-6zrdj" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.677374 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68eca474-5187-41ca-b67f-cb316a4ab410-catalog-content\") pod \"community-operators-6zrdj\" (UID: \"68eca474-5187-41ca-b67f-cb316a4ab410\") " pod="openshift-marketplace/community-operators-6zrdj" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.679290 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a524d92-a1c1-4494-b487-ba0df0e6a1ec-utilities\") pod \"certified-operators-f66lm\" (UID: \"9a524d92-a1c1-4494-b487-ba0df0e6a1ec\") " pod="openshift-marketplace/certified-operators-f66lm" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.679738 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a524d92-a1c1-4494-b487-ba0df0e6a1ec-catalog-content\") pod \"certified-operators-f66lm\" (UID: \"9a524d92-a1c1-4494-b487-ba0df0e6a1ec\") " pod="openshift-marketplace/certified-operators-f66lm" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.685784 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-8phbj" 
podStartSLOduration=8.685767049 podStartE2EDuration="8.685767049s" podCreationTimestamp="2025-12-15 05:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:37.680078413 +0000 UTC m=+141.376590330" watchObservedRunningTime="2025-12-15 05:39:37.685767049 +0000 UTC m=+141.382278966" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.704511 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbrtg\" (UniqueName: \"kubernetes.io/projected/9a524d92-a1c1-4494-b487-ba0df0e6a1ec-kube-api-access-lbrtg\") pod \"certified-operators-f66lm\" (UID: \"9a524d92-a1c1-4494-b487-ba0df0e6a1ec\") " pod="openshift-marketplace/certified-operators-f66lm" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.753186 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f66lm" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.769821 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lwqpd"] Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.771182 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lwqpd" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.779185 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww2s9\" (UniqueName: \"kubernetes.io/projected/68eca474-5187-41ca-b67f-cb316a4ab410-kube-api-access-ww2s9\") pod \"community-operators-6zrdj\" (UID: \"68eca474-5187-41ca-b67f-cb316a4ab410\") " pod="openshift-marketplace/community-operators-6zrdj" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.779487 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68eca474-5187-41ca-b67f-cb316a4ab410-utilities\") pod \"community-operators-6zrdj\" (UID: \"68eca474-5187-41ca-b67f-cb316a4ab410\") " pod="openshift-marketplace/community-operators-6zrdj" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.779699 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68eca474-5187-41ca-b67f-cb316a4ab410-catalog-content\") pod \"community-operators-6zrdj\" (UID: \"68eca474-5187-41ca-b67f-cb316a4ab410\") " pod="openshift-marketplace/community-operators-6zrdj" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.783335 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68eca474-5187-41ca-b67f-cb316a4ab410-catalog-content\") pod \"community-operators-6zrdj\" (UID: \"68eca474-5187-41ca-b67f-cb316a4ab410\") " pod="openshift-marketplace/community-operators-6zrdj" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.784190 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68eca474-5187-41ca-b67f-cb316a4ab410-utilities\") pod \"community-operators-6zrdj\" (UID: \"68eca474-5187-41ca-b67f-cb316a4ab410\") " 
pod="openshift-marketplace/community-operators-6zrdj" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.793504 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lwqpd"] Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.822777 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww2s9\" (UniqueName: \"kubernetes.io/projected/68eca474-5187-41ca-b67f-cb316a4ab410-kube-api-access-ww2s9\") pod \"community-operators-6zrdj\" (UID: \"68eca474-5187-41ca-b67f-cb316a4ab410\") " pod="openshift-marketplace/community-operators-6zrdj" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.845972 4747 patch_prober.go:28] interesting pod/router-default-5444994796-gvbxq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 15 05:39:37 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Dec 15 05:39:37 crc kubenswrapper[4747]: [+]process-running ok Dec 15 05:39:37 crc kubenswrapper[4747]: healthz check failed Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.846230 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvbxq" podUID="f9755e7f-72e0-4b8a-94c2-6702dec42d0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.884317 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6zrdj" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.895503 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm5zz\" (UniqueName: \"kubernetes.io/projected/3a8d5b87-e7b5-491f-aee8-98aa02ba9a14-kube-api-access-qm5zz\") pod \"certified-operators-lwqpd\" (UID: \"3a8d5b87-e7b5-491f-aee8-98aa02ba9a14\") " pod="openshift-marketplace/certified-operators-lwqpd" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.895604 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8d5b87-e7b5-491f-aee8-98aa02ba9a14-catalog-content\") pod \"certified-operators-lwqpd\" (UID: \"3a8d5b87-e7b5-491f-aee8-98aa02ba9a14\") " pod="openshift-marketplace/certified-operators-lwqpd" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.895638 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8d5b87-e7b5-491f-aee8-98aa02ba9a14-utilities\") pod \"certified-operators-lwqpd\" (UID: \"3a8d5b87-e7b5-491f-aee8-98aa02ba9a14\") " pod="openshift-marketplace/certified-operators-lwqpd" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.955409 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jwzfq"] Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.956425 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jwzfq" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.971651 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jwzfq"] Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.993045 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f66lm"] Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.997213 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm5zz\" (UniqueName: \"kubernetes.io/projected/3a8d5b87-e7b5-491f-aee8-98aa02ba9a14-kube-api-access-qm5zz\") pod \"certified-operators-lwqpd\" (UID: \"3a8d5b87-e7b5-491f-aee8-98aa02ba9a14\") " pod="openshift-marketplace/certified-operators-lwqpd" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.997551 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8d5b87-e7b5-491f-aee8-98aa02ba9a14-catalog-content\") pod \"certified-operators-lwqpd\" (UID: \"3a8d5b87-e7b5-491f-aee8-98aa02ba9a14\") " pod="openshift-marketplace/certified-operators-lwqpd" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.997585 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8d5b87-e7b5-491f-aee8-98aa02ba9a14-utilities\") pod \"certified-operators-lwqpd\" (UID: \"3a8d5b87-e7b5-491f-aee8-98aa02ba9a14\") " pod="openshift-marketplace/certified-operators-lwqpd" Dec 15 05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.998005 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8d5b87-e7b5-491f-aee8-98aa02ba9a14-utilities\") pod \"certified-operators-lwqpd\" (UID: \"3a8d5b87-e7b5-491f-aee8-98aa02ba9a14\") " pod="openshift-marketplace/certified-operators-lwqpd" Dec 15 
05:39:37 crc kubenswrapper[4747]: I1215 05:39:37.998340 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8d5b87-e7b5-491f-aee8-98aa02ba9a14-catalog-content\") pod \"certified-operators-lwqpd\" (UID: \"3a8d5b87-e7b5-491f-aee8-98aa02ba9a14\") " pod="openshift-marketplace/certified-operators-lwqpd" Dec 15 05:39:38 crc kubenswrapper[4747]: W1215 05:39:38.000259 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a524d92_a1c1_4494_b487_ba0df0e6a1ec.slice/crio-06d9d8b64d56d2946e2721376c9d7e5879e7467d9e357720a405854988a02675 WatchSource:0}: Error finding container 06d9d8b64d56d2946e2721376c9d7e5879e7467d9e357720a405854988a02675: Status 404 returned error can't find the container with id 06d9d8b64d56d2946e2721376c9d7e5879e7467d9e357720a405854988a02675 Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.034577 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm5zz\" (UniqueName: \"kubernetes.io/projected/3a8d5b87-e7b5-491f-aee8-98aa02ba9a14-kube-api-access-qm5zz\") pod \"certified-operators-lwqpd\" (UID: \"3a8d5b87-e7b5-491f-aee8-98aa02ba9a14\") " pod="openshift-marketplace/certified-operators-lwqpd" Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.099587 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95e7ce44-2981-4ea8-91a9-a9e897bdc80b-catalog-content\") pod \"community-operators-jwzfq\" (UID: \"95e7ce44-2981-4ea8-91a9-a9e897bdc80b\") " pod="openshift-marketplace/community-operators-jwzfq" Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.099653 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9nwv\" (UniqueName: 
\"kubernetes.io/projected/95e7ce44-2981-4ea8-91a9-a9e897bdc80b-kube-api-access-z9nwv\") pod \"community-operators-jwzfq\" (UID: \"95e7ce44-2981-4ea8-91a9-a9e897bdc80b\") " pod="openshift-marketplace/community-operators-jwzfq" Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.099967 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95e7ce44-2981-4ea8-91a9-a9e897bdc80b-utilities\") pod \"community-operators-jwzfq\" (UID: \"95e7ce44-2981-4ea8-91a9-a9e897bdc80b\") " pod="openshift-marketplace/community-operators-jwzfq" Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.105912 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6zrdj"] Dec 15 05:39:38 crc kubenswrapper[4747]: W1215 05:39:38.111047 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68eca474_5187_41ca_b67f_cb316a4ab410.slice/crio-bbe82d7b71b93438d6c3468e63fe74caf4d71e62a08283b21fc7daca07ef72a6 WatchSource:0}: Error finding container bbe82d7b71b93438d6c3468e63fe74caf4d71e62a08283b21fc7daca07ef72a6: Status 404 returned error can't find the container with id bbe82d7b71b93438d6c3468e63fe74caf4d71e62a08283b21fc7daca07ef72a6 Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.131496 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lwqpd" Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.142354 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gr6p7" Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.202327 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95e7ce44-2981-4ea8-91a9-a9e897bdc80b-catalog-content\") pod \"community-operators-jwzfq\" (UID: \"95e7ce44-2981-4ea8-91a9-a9e897bdc80b\") " pod="openshift-marketplace/community-operators-jwzfq" Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.202590 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9nwv\" (UniqueName: \"kubernetes.io/projected/95e7ce44-2981-4ea8-91a9-a9e897bdc80b-kube-api-access-z9nwv\") pod \"community-operators-jwzfq\" (UID: \"95e7ce44-2981-4ea8-91a9-a9e897bdc80b\") " pod="openshift-marketplace/community-operators-jwzfq" Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.202698 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95e7ce44-2981-4ea8-91a9-a9e897bdc80b-utilities\") pod \"community-operators-jwzfq\" (UID: \"95e7ce44-2981-4ea8-91a9-a9e897bdc80b\") " pod="openshift-marketplace/community-operators-jwzfq" Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.203074 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95e7ce44-2981-4ea8-91a9-a9e897bdc80b-utilities\") pod \"community-operators-jwzfq\" (UID: \"95e7ce44-2981-4ea8-91a9-a9e897bdc80b\") " pod="openshift-marketplace/community-operators-jwzfq" Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.203118 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95e7ce44-2981-4ea8-91a9-a9e897bdc80b-catalog-content\") pod \"community-operators-jwzfq\" (UID: \"95e7ce44-2981-4ea8-91a9-a9e897bdc80b\") " pod="openshift-marketplace/community-operators-jwzfq" Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.223418 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9nwv\" (UniqueName: \"kubernetes.io/projected/95e7ce44-2981-4ea8-91a9-a9e897bdc80b-kube-api-access-z9nwv\") pod \"community-operators-jwzfq\" (UID: \"95e7ce44-2981-4ea8-91a9-a9e897bdc80b\") " pod="openshift-marketplace/community-operators-jwzfq" Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.277718 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jwzfq" Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.327315 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lwqpd"] Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.467325 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jwzfq"] Dec 15 05:39:38 crc kubenswrapper[4747]: W1215 05:39:38.527421 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95e7ce44_2981_4ea8_91a9_a9e897bdc80b.slice/crio-d9cdb100dec2812d3600393759d3058a20192e8ac0aa67bbba82dd47514dce3c WatchSource:0}: Error finding container d9cdb100dec2812d3600393759d3058a20192e8ac0aa67bbba82dd47514dce3c: Status 404 returned error can't find the container with id d9cdb100dec2812d3600393759d3058a20192e8ac0aa67bbba82dd47514dce3c Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.642818 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 15 05:39:38 
crc kubenswrapper[4747]: I1215 05:39:38.668623 4747 generic.go:334] "Generic (PLEG): container finished" podID="3a8d5b87-e7b5-491f-aee8-98aa02ba9a14" containerID="828e9725304b4465b0f9e2b6c3649417bfe1bc9124bcb8a05b8c2b423c41e51d" exitCode=0 Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.668756 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lwqpd" event={"ID":"3a8d5b87-e7b5-491f-aee8-98aa02ba9a14","Type":"ContainerDied","Data":"828e9725304b4465b0f9e2b6c3649417bfe1bc9124bcb8a05b8c2b423c41e51d"} Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.668793 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lwqpd" event={"ID":"3a8d5b87-e7b5-491f-aee8-98aa02ba9a14","Type":"ContainerStarted","Data":"b8a2d4290a3a26d50d86ad200533ee7973e7a284edcce6854fae4ac5cf347c1d"} Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.671904 4747 generic.go:334] "Generic (PLEG): container finished" podID="68eca474-5187-41ca-b67f-cb316a4ab410" containerID="02fbc7db47723434454501ab28c28875ec5d5edef3ae74efd55e41e23d579bdb" exitCode=0 Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.672190 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zrdj" event={"ID":"68eca474-5187-41ca-b67f-cb316a4ab410","Type":"ContainerDied","Data":"02fbc7db47723434454501ab28c28875ec5d5edef3ae74efd55e41e23d579bdb"} Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.672234 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zrdj" event={"ID":"68eca474-5187-41ca-b67f-cb316a4ab410","Type":"ContainerStarted","Data":"bbe82d7b71b93438d6c3468e63fe74caf4d71e62a08283b21fc7daca07ef72a6"} Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.673151 4747 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 
05:39:38.676388 4747 generic.go:334] "Generic (PLEG): container finished" podID="95e7ce44-2981-4ea8-91a9-a9e897bdc80b" containerID="21a5db804ab466002fc353f26e06d1dd2b6bca6aae267d5e87ab4dc3095a05ee" exitCode=0 Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.676515 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwzfq" event={"ID":"95e7ce44-2981-4ea8-91a9-a9e897bdc80b","Type":"ContainerDied","Data":"21a5db804ab466002fc353f26e06d1dd2b6bca6aae267d5e87ab4dc3095a05ee"} Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.677347 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwzfq" event={"ID":"95e7ce44-2981-4ea8-91a9-a9e897bdc80b","Type":"ContainerStarted","Data":"d9cdb100dec2812d3600393759d3058a20192e8ac0aa67bbba82dd47514dce3c"} Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.687121 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" event={"ID":"db7a7a97-4354-4b54-afbc-e47fb8751316","Type":"ContainerStarted","Data":"ef9f48e4c4a24b96cc01c36f2c265127ef3bdfe7596733449be856abe5564602"} Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.687183 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" event={"ID":"db7a7a97-4354-4b54-afbc-e47fb8751316","Type":"ContainerStarted","Data":"d73ac43fa234248b2fcbc1bce861d8ca58ace7d5437852eec4e192990d8f7b90"} Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.687302 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.691463 4747 generic.go:334] "Generic (PLEG): container finished" podID="9a524d92-a1c1-4494-b487-ba0df0e6a1ec" containerID="6c41ed4198b50c7d1a59c743cc2e5a5b924568dbd7e1533487bdd81f2b69b307" exitCode=0 Dec 15 05:39:38 crc 
kubenswrapper[4747]: I1215 05:39:38.691652 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f66lm" event={"ID":"9a524d92-a1c1-4494-b487-ba0df0e6a1ec","Type":"ContainerDied","Data":"6c41ed4198b50c7d1a59c743cc2e5a5b924568dbd7e1533487bdd81f2b69b307"} Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.691711 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f66lm" event={"ID":"9a524d92-a1c1-4494-b487-ba0df0e6a1ec","Type":"ContainerStarted","Data":"06d9d8b64d56d2946e2721376c9d7e5879e7467d9e357720a405854988a02675"} Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.737086 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" podStartSLOduration=124.73707022 podStartE2EDuration="2m4.73707022s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:38.73624066 +0000 UTC m=+142.432752567" watchObservedRunningTime="2025-12-15 05:39:38.73707022 +0000 UTC m=+142.433582137" Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.844551 4747 patch_prober.go:28] interesting pod/router-default-5444994796-gvbxq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 15 05:39:38 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Dec 15 05:39:38 crc kubenswrapper[4747]: [+]process-running ok Dec 15 05:39:38 crc kubenswrapper[4747]: healthz check failed Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.844642 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvbxq" podUID="f9755e7f-72e0-4b8a-94c2-6702dec42d0b" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Dec 15 05:39:38 crc kubenswrapper[4747]: I1215 05:39:38.894337 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.014319 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62-config-volume\") pod \"ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62\" (UID: \"ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62\") " Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.014532 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5skg\" (UniqueName: \"kubernetes.io/projected/ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62-kube-api-access-m5skg\") pod \"ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62\" (UID: \"ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62\") " Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.014567 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62-secret-volume\") pod \"ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62\" (UID: \"ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62\") " Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.015363 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62-config-volume" (OuterVolumeSpecName: "config-volume") pod "ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62" (UID: "ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.020225 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62" (UID: "ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.020662 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62-kube-api-access-m5skg" (OuterVolumeSpecName: "kube-api-access-m5skg") pod "ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62" (UID: "ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62"). InnerVolumeSpecName "kube-api-access-m5skg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.116906 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5skg\" (UniqueName: \"kubernetes.io/projected/ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62-kube-api-access-m5skg\") on node \"crc\" DevicePath \"\"" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.116957 4747 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.116969 4747 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62-config-volume\") on node \"crc\" DevicePath \"\"" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.356441 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gxjhc"] Dec 15 05:39:39 crc kubenswrapper[4747]: E1215 05:39:39.356709 4747 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62" containerName="collect-profiles" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.356729 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62" containerName="collect-profiles" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.356845 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62" containerName="collect-profiles" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.357687 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxjhc" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.359689 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.366114 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxjhc"] Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.523053 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/496c9f6b-020f-4ba2-9031-4dfee47f18ab-utilities\") pod \"redhat-marketplace-gxjhc\" (UID: \"496c9f6b-020f-4ba2-9031-4dfee47f18ab\") " pod="openshift-marketplace/redhat-marketplace-gxjhc" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.523155 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/496c9f6b-020f-4ba2-9031-4dfee47f18ab-catalog-content\") pod \"redhat-marketplace-gxjhc\" (UID: \"496c9f6b-020f-4ba2-9031-4dfee47f18ab\") " pod="openshift-marketplace/redhat-marketplace-gxjhc" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.523232 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf744\" (UniqueName: \"kubernetes.io/projected/496c9f6b-020f-4ba2-9031-4dfee47f18ab-kube-api-access-bf744\") pod \"redhat-marketplace-gxjhc\" (UID: \"496c9f6b-020f-4ba2-9031-4dfee47f18ab\") " pod="openshift-marketplace/redhat-marketplace-gxjhc" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.549868 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.550810 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.552559 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.552822 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.559883 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.624891 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/496c9f6b-020f-4ba2-9031-4dfee47f18ab-catalog-content\") pod \"redhat-marketplace-gxjhc\" (UID: \"496c9f6b-020f-4ba2-9031-4dfee47f18ab\") " pod="openshift-marketplace/redhat-marketplace-gxjhc" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.624963 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf744\" (UniqueName: \"kubernetes.io/projected/496c9f6b-020f-4ba2-9031-4dfee47f18ab-kube-api-access-bf744\") pod \"redhat-marketplace-gxjhc\" (UID: 
\"496c9f6b-020f-4ba2-9031-4dfee47f18ab\") " pod="openshift-marketplace/redhat-marketplace-gxjhc" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.625077 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/496c9f6b-020f-4ba2-9031-4dfee47f18ab-utilities\") pod \"redhat-marketplace-gxjhc\" (UID: \"496c9f6b-020f-4ba2-9031-4dfee47f18ab\") " pod="openshift-marketplace/redhat-marketplace-gxjhc" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.625825 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/496c9f6b-020f-4ba2-9031-4dfee47f18ab-utilities\") pod \"redhat-marketplace-gxjhc\" (UID: \"496c9f6b-020f-4ba2-9031-4dfee47f18ab\") " pod="openshift-marketplace/redhat-marketplace-gxjhc" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.626253 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/496c9f6b-020f-4ba2-9031-4dfee47f18ab-catalog-content\") pod \"redhat-marketplace-gxjhc\" (UID: \"496c9f6b-020f-4ba2-9031-4dfee47f18ab\") " pod="openshift-marketplace/redhat-marketplace-gxjhc" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.648147 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf744\" (UniqueName: \"kubernetes.io/projected/496c9f6b-020f-4ba2-9031-4dfee47f18ab-kube-api-access-bf744\") pod \"redhat-marketplace-gxjhc\" (UID: \"496c9f6b-020f-4ba2-9031-4dfee47f18ab\") " pod="openshift-marketplace/redhat-marketplace-gxjhc" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.670773 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxjhc" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.701758 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8" event={"ID":"ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62","Type":"ContainerDied","Data":"e905748daf0e80829c719015ed7f8aa2280ede98989749dfe040e5bd60f28af5"} Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.701798 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e905748daf0e80829c719015ed7f8aa2280ede98989749dfe040e5bd60f28af5" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.701895 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.726494 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/166f8882-1da0-434d-9b5f-79a43223e9fb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"166f8882-1da0-434d-9b5f-79a43223e9fb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.726783 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/166f8882-1da0-434d-9b5f-79a43223e9fb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"166f8882-1da0-434d-9b5f-79a43223e9fb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.763285 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-88ln6"] Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.767873 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-88ln6" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.770089 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-88ln6"] Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.828887 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/166f8882-1da0-434d-9b5f-79a43223e9fb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"166f8882-1da0-434d-9b5f-79a43223e9fb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.829035 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/166f8882-1da0-434d-9b5f-79a43223e9fb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"166f8882-1da0-434d-9b5f-79a43223e9fb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.830406 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/166f8882-1da0-434d-9b5f-79a43223e9fb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"166f8882-1da0-434d-9b5f-79a43223e9fb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.846332 4747 patch_prober.go:28] interesting pod/router-default-5444994796-gvbxq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 15 05:39:39 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Dec 15 05:39:39 crc kubenswrapper[4747]: [+]process-running ok Dec 15 05:39:39 crc kubenswrapper[4747]: healthz check failed Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 
05:39:39.846576 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvbxq" podUID="f9755e7f-72e0-4b8a-94c2-6702dec42d0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.853134 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/166f8882-1da0-434d-9b5f-79a43223e9fb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"166f8882-1da0-434d-9b5f-79a43223e9fb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.869361 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.931022 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9d8j\" (UniqueName: \"kubernetes.io/projected/7f997591-b82e-4c3b-85b9-5106a8168eec-kube-api-access-t9d8j\") pod \"redhat-marketplace-88ln6\" (UID: \"7f997591-b82e-4c3b-85b9-5106a8168eec\") " pod="openshift-marketplace/redhat-marketplace-88ln6" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.931083 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f997591-b82e-4c3b-85b9-5106a8168eec-catalog-content\") pod \"redhat-marketplace-88ln6\" (UID: \"7f997591-b82e-4c3b-85b9-5106a8168eec\") " pod="openshift-marketplace/redhat-marketplace-88ln6" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.931130 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f997591-b82e-4c3b-85b9-5106a8168eec-utilities\") pod \"redhat-marketplace-88ln6\" 
(UID: \"7f997591-b82e-4c3b-85b9-5106a8168eec\") " pod="openshift-marketplace/redhat-marketplace-88ln6" Dec 15 05:39:39 crc kubenswrapper[4747]: I1215 05:39:39.952069 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxjhc"] Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.032690 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9d8j\" (UniqueName: \"kubernetes.io/projected/7f997591-b82e-4c3b-85b9-5106a8168eec-kube-api-access-t9d8j\") pod \"redhat-marketplace-88ln6\" (UID: \"7f997591-b82e-4c3b-85b9-5106a8168eec\") " pod="openshift-marketplace/redhat-marketplace-88ln6" Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.032769 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f997591-b82e-4c3b-85b9-5106a8168eec-catalog-content\") pod \"redhat-marketplace-88ln6\" (UID: \"7f997591-b82e-4c3b-85b9-5106a8168eec\") " pod="openshift-marketplace/redhat-marketplace-88ln6" Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.032832 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f997591-b82e-4c3b-85b9-5106a8168eec-utilities\") pod \"redhat-marketplace-88ln6\" (UID: \"7f997591-b82e-4c3b-85b9-5106a8168eec\") " pod="openshift-marketplace/redhat-marketplace-88ln6" Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.033389 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f997591-b82e-4c3b-85b9-5106a8168eec-utilities\") pod \"redhat-marketplace-88ln6\" (UID: \"7f997591-b82e-4c3b-85b9-5106a8168eec\") " pod="openshift-marketplace/redhat-marketplace-88ln6" Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.033423 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/7f997591-b82e-4c3b-85b9-5106a8168eec-catalog-content\") pod \"redhat-marketplace-88ln6\" (UID: \"7f997591-b82e-4c3b-85b9-5106a8168eec\") " pod="openshift-marketplace/redhat-marketplace-88ln6" Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.048874 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9d8j\" (UniqueName: \"kubernetes.io/projected/7f997591-b82e-4c3b-85b9-5106a8168eec-kube-api-access-t9d8j\") pod \"redhat-marketplace-88ln6\" (UID: \"7f997591-b82e-4c3b-85b9-5106a8168eec\") " pod="openshift-marketplace/redhat-marketplace-88ln6" Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.080138 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.082726 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-88ln6" Dec 15 05:39:40 crc kubenswrapper[4747]: W1215 05:39:40.093540 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod166f8882_1da0_434d_9b5f_79a43223e9fb.slice/crio-a48a85949173981cc735f750ae08de7efc2dc959f08b930c0ddd75bae41520b1 WatchSource:0}: Error finding container a48a85949173981cc735f750ae08de7efc2dc959f08b930c0ddd75bae41520b1: Status 404 returned error can't find the container with id a48a85949173981cc735f750ae08de7efc2dc959f08b930c0ddd75bae41520b1 Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.308876 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-88ln6"] Dec 15 05:39:40 crc kubenswrapper[4747]: W1215 05:39:40.356288 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f997591_b82e_4c3b_85b9_5106a8168eec.slice/crio-4fd8a16e0e18f7b4fcb01c1da294e1409810717f403bbaa3ac7660d68e7e47c8 
WatchSource:0}: Error finding container 4fd8a16e0e18f7b4fcb01c1da294e1409810717f403bbaa3ac7660d68e7e47c8: Status 404 returned error can't find the container with id 4fd8a16e0e18f7b4fcb01c1da294e1409810717f403bbaa3ac7660d68e7e47c8 Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.559315 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hhfhz"] Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.560816 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hhfhz" Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.564417 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.565651 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hhfhz"] Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.639815 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f228\" (UniqueName: \"kubernetes.io/projected/97c38c52-062a-4f94-9992-f944bb0519ee-kube-api-access-8f228\") pod \"redhat-operators-hhfhz\" (UID: \"97c38c52-062a-4f94-9992-f944bb0519ee\") " pod="openshift-marketplace/redhat-operators-hhfhz" Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.639969 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97c38c52-062a-4f94-9992-f944bb0519ee-utilities\") pod \"redhat-operators-hhfhz\" (UID: \"97c38c52-062a-4f94-9992-f944bb0519ee\") " pod="openshift-marketplace/redhat-operators-hhfhz" Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.640101 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/97c38c52-062a-4f94-9992-f944bb0519ee-catalog-content\") pod \"redhat-operators-hhfhz\" (UID: \"97c38c52-062a-4f94-9992-f944bb0519ee\") " pod="openshift-marketplace/redhat-operators-hhfhz" Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.709220 4747 generic.go:334] "Generic (PLEG): container finished" podID="7f997591-b82e-4c3b-85b9-5106a8168eec" containerID="bf3f35cf634b95a3a93dacd5fcbb75f407bcb1e74c4d08d06fffd6adc421372c" exitCode=0 Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.709321 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88ln6" event={"ID":"7f997591-b82e-4c3b-85b9-5106a8168eec","Type":"ContainerDied","Data":"bf3f35cf634b95a3a93dacd5fcbb75f407bcb1e74c4d08d06fffd6adc421372c"} Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.709368 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88ln6" event={"ID":"7f997591-b82e-4c3b-85b9-5106a8168eec","Type":"ContainerStarted","Data":"4fd8a16e0e18f7b4fcb01c1da294e1409810717f403bbaa3ac7660d68e7e47c8"} Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.713177 4747 generic.go:334] "Generic (PLEG): container finished" podID="496c9f6b-020f-4ba2-9031-4dfee47f18ab" containerID="713e0c3eaf339ff962703d6c50670684e698409b50abf5a06b118fa89ae1881a" exitCode=0 Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.713240 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxjhc" event={"ID":"496c9f6b-020f-4ba2-9031-4dfee47f18ab","Type":"ContainerDied","Data":"713e0c3eaf339ff962703d6c50670684e698409b50abf5a06b118fa89ae1881a"} Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.713267 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxjhc" 
event={"ID":"496c9f6b-020f-4ba2-9031-4dfee47f18ab","Type":"ContainerStarted","Data":"444f70b304b0c10e116c30637516b2577650b85901bcfcfbe4cde53b6dd2b91f"} Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.716325 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"166f8882-1da0-434d-9b5f-79a43223e9fb","Type":"ContainerStarted","Data":"47f5d50889c0cccb01aeea61be173c530bcc04a75f299198dc68bde21ee1a992"} Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.716368 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"166f8882-1da0-434d-9b5f-79a43223e9fb","Type":"ContainerStarted","Data":"a48a85949173981cc735f750ae08de7efc2dc959f08b930c0ddd75bae41520b1"} Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.735396 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.735385472 podStartE2EDuration="1.735385472s" podCreationTimestamp="2025-12-15 05:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:39:40.732948841 +0000 UTC m=+144.429460758" watchObservedRunningTime="2025-12-15 05:39:40.735385472 +0000 UTC m=+144.431897389" Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.746109 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f228\" (UniqueName: \"kubernetes.io/projected/97c38c52-062a-4f94-9992-f944bb0519ee-kube-api-access-8f228\") pod \"redhat-operators-hhfhz\" (UID: \"97c38c52-062a-4f94-9992-f944bb0519ee\") " pod="openshift-marketplace/redhat-operators-hhfhz" Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.746211 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/97c38c52-062a-4f94-9992-f944bb0519ee-utilities\") pod \"redhat-operators-hhfhz\" (UID: \"97c38c52-062a-4f94-9992-f944bb0519ee\") " pod="openshift-marketplace/redhat-operators-hhfhz" Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.746308 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97c38c52-062a-4f94-9992-f944bb0519ee-catalog-content\") pod \"redhat-operators-hhfhz\" (UID: \"97c38c52-062a-4f94-9992-f944bb0519ee\") " pod="openshift-marketplace/redhat-operators-hhfhz" Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.746743 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97c38c52-062a-4f94-9992-f944bb0519ee-catalog-content\") pod \"redhat-operators-hhfhz\" (UID: \"97c38c52-062a-4f94-9992-f944bb0519ee\") " pod="openshift-marketplace/redhat-operators-hhfhz" Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.747199 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97c38c52-062a-4f94-9992-f944bb0519ee-utilities\") pod \"redhat-operators-hhfhz\" (UID: \"97c38c52-062a-4f94-9992-f944bb0519ee\") " pod="openshift-marketplace/redhat-operators-hhfhz" Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.801346 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f228\" (UniqueName: \"kubernetes.io/projected/97c38c52-062a-4f94-9992-f944bb0519ee-kube-api-access-8f228\") pod \"redhat-operators-hhfhz\" (UID: \"97c38c52-062a-4f94-9992-f944bb0519ee\") " pod="openshift-marketplace/redhat-operators-hhfhz" Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.846741 4747 patch_prober.go:28] interesting pod/router-default-5444994796-gvbxq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 15 05:39:40 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Dec 15 05:39:40 crc kubenswrapper[4747]: [+]process-running ok Dec 15 05:39:40 crc kubenswrapper[4747]: healthz check failed Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.847042 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvbxq" podUID="f9755e7f-72e0-4b8a-94c2-6702dec42d0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.898019 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hhfhz" Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.958448 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ldqmm"] Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.959433 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ldqmm" Dec 15 05:39:40 crc kubenswrapper[4747]: I1215 05:39:40.972763 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ldqmm"] Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.052970 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5360354e-e2a9-4bf5-bc74-e1b778b512f5-catalog-content\") pod \"redhat-operators-ldqmm\" (UID: \"5360354e-e2a9-4bf5-bc74-e1b778b512f5\") " pod="openshift-marketplace/redhat-operators-ldqmm" Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.053083 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6ffp\" (UniqueName: \"kubernetes.io/projected/5360354e-e2a9-4bf5-bc74-e1b778b512f5-kube-api-access-b6ffp\") pod \"redhat-operators-ldqmm\" (UID: \"5360354e-e2a9-4bf5-bc74-e1b778b512f5\") " pod="openshift-marketplace/redhat-operators-ldqmm" Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.053155 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5360354e-e2a9-4bf5-bc74-e1b778b512f5-utilities\") pod \"redhat-operators-ldqmm\" (UID: \"5360354e-e2a9-4bf5-bc74-e1b778b512f5\") " pod="openshift-marketplace/redhat-operators-ldqmm" Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.156396 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6ffp\" (UniqueName: \"kubernetes.io/projected/5360354e-e2a9-4bf5-bc74-e1b778b512f5-kube-api-access-b6ffp\") pod \"redhat-operators-ldqmm\" (UID: \"5360354e-e2a9-4bf5-bc74-e1b778b512f5\") " pod="openshift-marketplace/redhat-operators-ldqmm" Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.156687 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5360354e-e2a9-4bf5-bc74-e1b778b512f5-utilities\") pod \"redhat-operators-ldqmm\" (UID: \"5360354e-e2a9-4bf5-bc74-e1b778b512f5\") " pod="openshift-marketplace/redhat-operators-ldqmm" Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.156746 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5360354e-e2a9-4bf5-bc74-e1b778b512f5-catalog-content\") pod \"redhat-operators-ldqmm\" (UID: \"5360354e-e2a9-4bf5-bc74-e1b778b512f5\") " pod="openshift-marketplace/redhat-operators-ldqmm" Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.157423 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5360354e-e2a9-4bf5-bc74-e1b778b512f5-catalog-content\") pod \"redhat-operators-ldqmm\" (UID: \"5360354e-e2a9-4bf5-bc74-e1b778b512f5\") " pod="openshift-marketplace/redhat-operators-ldqmm" Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.158200 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5360354e-e2a9-4bf5-bc74-e1b778b512f5-utilities\") pod \"redhat-operators-ldqmm\" (UID: \"5360354e-e2a9-4bf5-bc74-e1b778b512f5\") " pod="openshift-marketplace/redhat-operators-ldqmm" Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.180141 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6ffp\" (UniqueName: \"kubernetes.io/projected/5360354e-e2a9-4bf5-bc74-e1b778b512f5-kube-api-access-b6ffp\") pod \"redhat-operators-ldqmm\" (UID: \"5360354e-e2a9-4bf5-bc74-e1b778b512f5\") " pod="openshift-marketplace/redhat-operators-ldqmm" Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.187458 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hhfhz"] Dec 
15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.358527 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.358573 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.364471 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.381106 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ldqmm" Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.588688 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ldqmm"] Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.732448 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-5jmx6" Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.736874 4747 generic.go:334] "Generic (PLEG): container finished" podID="97c38c52-062a-4f94-9992-f944bb0519ee" containerID="c2676ffce716097e93af38f66714dc6a4bdd1c307b5e4bd219e8c547dc0f9b2e" exitCode=0 Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.737290 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhfhz" event={"ID":"97c38c52-062a-4f94-9992-f944bb0519ee","Type":"ContainerDied","Data":"c2676ffce716097e93af38f66714dc6a4bdd1c307b5e4bd219e8c547dc0f9b2e"} Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.737372 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhfhz" 
event={"ID":"97c38c52-062a-4f94-9992-f944bb0519ee","Type":"ContainerStarted","Data":"af137d78b247057d75652eaaef9d3682d2f68a85a7fa97ea2a6b813885112a5e"} Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.746028 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldqmm" event={"ID":"5360354e-e2a9-4bf5-bc74-e1b778b512f5","Type":"ContainerStarted","Data":"b569fe85f9cb897be36614cb2059569b0bcafad07a4a07465542bcb702423ece"} Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.749954 4747 generic.go:334] "Generic (PLEG): container finished" podID="166f8882-1da0-434d-9b5f-79a43223e9fb" containerID="47f5d50889c0cccb01aeea61be173c530bcc04a75f299198dc68bde21ee1a992" exitCode=0 Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.750461 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"166f8882-1da0-434d-9b5f-79a43223e9fb","Type":"ContainerDied","Data":"47f5d50889c0cccb01aeea61be173c530bcc04a75f299198dc68bde21ee1a992"} Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.768442 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.768690 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-ml4rr" Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.769237 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.772302 4747 patch_prober.go:28] interesting pod/console-f9d7485db-2sdgk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.32:8443/health\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 
05:39:41.772341 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-2sdgk" podUID="17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.32:8443/health\": dial tcp 10.217.0.32:8443: connect: connection refused" Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.841640 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-gvbxq" Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.852566 4747 patch_prober.go:28] interesting pod/router-default-5444994796-gvbxq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 15 05:39:41 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Dec 15 05:39:41 crc kubenswrapper[4747]: [+]process-running ok Dec 15 05:39:41 crc kubenswrapper[4747]: healthz check failed Dec 15 05:39:41 crc kubenswrapper[4747]: I1215 05:39:41.852640 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvbxq" podUID="f9755e7f-72e0-4b8a-94c2-6702dec42d0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.055754 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-75dh6" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.494762 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.496158 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.499504 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.499721 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.500852 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.597585 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97e304e6-0512-4367-9811-e41ccac42926-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"97e304e6-0512-4367-9811-e41ccac42926\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.597715 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/97e304e6-0512-4367-9811-e41ccac42926-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"97e304e6-0512-4367-9811-e41ccac42926\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.698683 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/97e304e6-0512-4367-9811-e41ccac42926-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"97e304e6-0512-4367-9811-e41ccac42926\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.698752 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.698798 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97e304e6-0512-4367-9811-e41ccac42926-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"97e304e6-0512-4367-9811-e41ccac42926\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.698841 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.698880 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.698920 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:39:42 crc kubenswrapper[4747]: 
I1215 05:39:42.699447 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/97e304e6-0512-4367-9811-e41ccac42926-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"97e304e6-0512-4367-9811-e41ccac42926\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.699843 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.705740 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.709122 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.713799 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97e304e6-0512-4367-9811-e41ccac42926-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"97e304e6-0512-4367-9811-e41ccac42926\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.722140 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.740402 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.751293 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.756212 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.775213 4747 generic.go:334] "Generic (PLEG): container finished" podID="5360354e-e2a9-4bf5-bc74-e1b778b512f5" containerID="4b212954a1ca6b472e1ff65ccbcdb0b5977e344ecc276616a172293ccf4914fd" exitCode=0 Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.775333 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldqmm" event={"ID":"5360354e-e2a9-4bf5-bc74-e1b778b512f5","Type":"ContainerDied","Data":"4b212954a1ca6b472e1ff65ccbcdb0b5977e344ecc276616a172293ccf4914fd"} Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.816238 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.844270 4747 patch_prober.go:28] interesting pod/router-default-5444994796-gvbxq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 15 05:39:42 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Dec 15 05:39:42 crc kubenswrapper[4747]: [+]process-running ok Dec 15 05:39:42 crc kubenswrapper[4747]: healthz check failed Dec 15 05:39:42 crc kubenswrapper[4747]: I1215 05:39:42.844514 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvbxq" podUID="f9755e7f-72e0-4b8a-94c2-6702dec42d0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 15 05:39:43 crc kubenswrapper[4747]: I1215 05:39:43.094003 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 15 05:39:43 crc kubenswrapper[4747]: I1215 05:39:43.210309 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/166f8882-1da0-434d-9b5f-79a43223e9fb-kubelet-dir\") pod \"166f8882-1da0-434d-9b5f-79a43223e9fb\" (UID: \"166f8882-1da0-434d-9b5f-79a43223e9fb\") " Dec 15 05:39:43 crc kubenswrapper[4747]: I1215 05:39:43.210431 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/166f8882-1da0-434d-9b5f-79a43223e9fb-kube-api-access\") pod \"166f8882-1da0-434d-9b5f-79a43223e9fb\" (UID: \"166f8882-1da0-434d-9b5f-79a43223e9fb\") " Dec 15 05:39:43 crc kubenswrapper[4747]: I1215 05:39:43.210429 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/166f8882-1da0-434d-9b5f-79a43223e9fb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "166f8882-1da0-434d-9b5f-79a43223e9fb" (UID: "166f8882-1da0-434d-9b5f-79a43223e9fb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:39:43 crc kubenswrapper[4747]: I1215 05:39:43.210762 4747 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/166f8882-1da0-434d-9b5f-79a43223e9fb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 15 05:39:43 crc kubenswrapper[4747]: I1215 05:39:43.215436 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/166f8882-1da0-434d-9b5f-79a43223e9fb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "166f8882-1da0-434d-9b5f-79a43223e9fb" (UID: "166f8882-1da0-434d-9b5f-79a43223e9fb"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:39:43 crc kubenswrapper[4747]: I1215 05:39:43.312255 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/166f8882-1da0-434d-9b5f-79a43223e9fb-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 15 05:39:43 crc kubenswrapper[4747]: W1215 05:39:43.477057 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-a660dbf1c3f579f4ddf54ed2dc58efb9c754698aa10ab97087f6356ffa9c0f11 WatchSource:0}: Error finding container a660dbf1c3f579f4ddf54ed2dc58efb9c754698aa10ab97087f6356ffa9c0f11: Status 404 returned error can't find the container with id a660dbf1c3f579f4ddf54ed2dc58efb9c754698aa10ab97087f6356ffa9c0f11 Dec 15 05:39:43 crc kubenswrapper[4747]: W1215 05:39:43.493710 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-220a65184f2a3c2bea6039fcc9dc6642f47b90302f205e6f7afc245442f25ffe WatchSource:0}: Error finding container 220a65184f2a3c2bea6039fcc9dc6642f47b90302f205e6f7afc245442f25ffe: Status 404 returned error can't find the container with id 220a65184f2a3c2bea6039fcc9dc6642f47b90302f205e6f7afc245442f25ffe Dec 15 05:39:43 crc kubenswrapper[4747]: I1215 05:39:43.508474 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 15 05:39:43 crc kubenswrapper[4747]: I1215 05:39:43.791676 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"fc06fba3a7abd74a8f98fa35e0cd1669b6ac77ce480eca3ed24baccedacc0bb8"} Dec 15 05:39:43 crc kubenswrapper[4747]: I1215 05:39:43.791722 4747 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"78bc7e72a643a8b31d638911db8e012b77a9cb0b9fce97ee26836c1b103ab0cf"} Dec 15 05:39:43 crc kubenswrapper[4747]: I1215 05:39:43.816540 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ac9427d5b91775b53bcba7d7def1315b0a41119b7ab1cb37d1101c66736f6dd9"} Dec 15 05:39:43 crc kubenswrapper[4747]: I1215 05:39:43.816589 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a660dbf1c3f579f4ddf54ed2dc58efb9c754698aa10ab97087f6356ffa9c0f11"} Dec 15 05:39:43 crc kubenswrapper[4747]: I1215 05:39:43.818561 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"220a65184f2a3c2bea6039fcc9dc6642f47b90302f205e6f7afc245442f25ffe"} Dec 15 05:39:43 crc kubenswrapper[4747]: I1215 05:39:43.842609 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"166f8882-1da0-434d-9b5f-79a43223e9fb","Type":"ContainerDied","Data":"a48a85949173981cc735f750ae08de7efc2dc959f08b930c0ddd75bae41520b1"} Dec 15 05:39:43 crc kubenswrapper[4747]: I1215 05:39:43.842660 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a48a85949173981cc735f750ae08de7efc2dc959f08b930c0ddd75bae41520b1" Dec 15 05:39:43 crc kubenswrapper[4747]: I1215 05:39:43.842745 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 15 05:39:43 crc kubenswrapper[4747]: I1215 05:39:43.850894 4747 patch_prober.go:28] interesting pod/router-default-5444994796-gvbxq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 15 05:39:43 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Dec 15 05:39:43 crc kubenswrapper[4747]: [+]process-running ok Dec 15 05:39:43 crc kubenswrapper[4747]: healthz check failed Dec 15 05:39:43 crc kubenswrapper[4747]: I1215 05:39:43.850979 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvbxq" podUID="f9755e7f-72e0-4b8a-94c2-6702dec42d0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 15 05:39:43 crc kubenswrapper[4747]: I1215 05:39:43.860879 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"97e304e6-0512-4367-9811-e41ccac42926","Type":"ContainerStarted","Data":"8488dc8f66909090aa739c50b6879473338bc8a7db292b9938f2f2eb9f68a2ea"} Dec 15 05:39:44 crc kubenswrapper[4747]: I1215 05:39:44.308884 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-k72td" Dec 15 05:39:44 crc kubenswrapper[4747]: I1215 05:39:44.844455 4747 patch_prober.go:28] interesting pod/router-default-5444994796-gvbxq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 15 05:39:44 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Dec 15 05:39:44 crc kubenswrapper[4747]: [+]process-running ok Dec 15 05:39:44 crc kubenswrapper[4747]: healthz check failed Dec 15 05:39:44 crc kubenswrapper[4747]: I1215 05:39:44.844812 4747 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvbxq" podUID="f9755e7f-72e0-4b8a-94c2-6702dec42d0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 15 05:39:44 crc kubenswrapper[4747]: I1215 05:39:44.867761 4747 generic.go:334] "Generic (PLEG): container finished" podID="97e304e6-0512-4367-9811-e41ccac42926" containerID="b2a52d47a9b2b3b42fb7c2bffc509a068c4c24929954dbc91e15f7132932ad36" exitCode=0 Dec 15 05:39:44 crc kubenswrapper[4747]: I1215 05:39:44.867861 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"97e304e6-0512-4367-9811-e41ccac42926","Type":"ContainerDied","Data":"b2a52d47a9b2b3b42fb7c2bffc509a068c4c24929954dbc91e15f7132932ad36"} Dec 15 05:39:45 crc kubenswrapper[4747]: I1215 05:39:45.845904 4747 patch_prober.go:28] interesting pod/router-default-5444994796-gvbxq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 15 05:39:45 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Dec 15 05:39:45 crc kubenswrapper[4747]: [+]process-running ok Dec 15 05:39:45 crc kubenswrapper[4747]: healthz check failed Dec 15 05:39:45 crc kubenswrapper[4747]: I1215 05:39:45.845982 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvbxq" podUID="f9755e7f-72e0-4b8a-94c2-6702dec42d0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 15 05:39:46 crc kubenswrapper[4747]: I1215 05:39:46.847849 4747 patch_prober.go:28] interesting pod/router-default-5444994796-gvbxq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 15 05:39:46 crc 
kubenswrapper[4747]: [-]has-synced failed: reason withheld Dec 15 05:39:46 crc kubenswrapper[4747]: [+]process-running ok Dec 15 05:39:46 crc kubenswrapper[4747]: healthz check failed Dec 15 05:39:46 crc kubenswrapper[4747]: I1215 05:39:46.848194 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvbxq" podUID="f9755e7f-72e0-4b8a-94c2-6702dec42d0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 15 05:39:47 crc kubenswrapper[4747]: I1215 05:39:47.846952 4747 patch_prober.go:28] interesting pod/router-default-5444994796-gvbxq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 15 05:39:47 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Dec 15 05:39:47 crc kubenswrapper[4747]: [+]process-running ok Dec 15 05:39:47 crc kubenswrapper[4747]: healthz check failed Dec 15 05:39:47 crc kubenswrapper[4747]: I1215 05:39:47.847019 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvbxq" podUID="f9755e7f-72e0-4b8a-94c2-6702dec42d0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 15 05:39:48 crc kubenswrapper[4747]: I1215 05:39:48.392320 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 15 05:39:48 crc kubenswrapper[4747]: I1215 05:39:48.518308 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/97e304e6-0512-4367-9811-e41ccac42926-kubelet-dir\") pod \"97e304e6-0512-4367-9811-e41ccac42926\" (UID: \"97e304e6-0512-4367-9811-e41ccac42926\") " Dec 15 05:39:48 crc kubenswrapper[4747]: I1215 05:39:48.518380 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97e304e6-0512-4367-9811-e41ccac42926-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "97e304e6-0512-4367-9811-e41ccac42926" (UID: "97e304e6-0512-4367-9811-e41ccac42926"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:39:48 crc kubenswrapper[4747]: I1215 05:39:48.518405 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97e304e6-0512-4367-9811-e41ccac42926-kube-api-access\") pod \"97e304e6-0512-4367-9811-e41ccac42926\" (UID: \"97e304e6-0512-4367-9811-e41ccac42926\") " Dec 15 05:39:48 crc kubenswrapper[4747]: I1215 05:39:48.518701 4747 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/97e304e6-0512-4367-9811-e41ccac42926-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 15 05:39:48 crc kubenswrapper[4747]: I1215 05:39:48.535425 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97e304e6-0512-4367-9811-e41ccac42926-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "97e304e6-0512-4367-9811-e41ccac42926" (UID: "97e304e6-0512-4367-9811-e41ccac42926"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:39:48 crc kubenswrapper[4747]: I1215 05:39:48.619399 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/97e304e6-0512-4367-9811-e41ccac42926-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 15 05:39:48 crc kubenswrapper[4747]: I1215 05:39:48.846374 4747 patch_prober.go:28] interesting pod/router-default-5444994796-gvbxq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 15 05:39:48 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Dec 15 05:39:48 crc kubenswrapper[4747]: [+]process-running ok Dec 15 05:39:48 crc kubenswrapper[4747]: healthz check failed Dec 15 05:39:48 crc kubenswrapper[4747]: I1215 05:39:48.846450 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvbxq" podUID="f9755e7f-72e0-4b8a-94c2-6702dec42d0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 15 05:39:48 crc kubenswrapper[4747]: I1215 05:39:48.913390 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3b350239979c12f456f5647d380d1f83486428626f62c314ff0a7b584fa5d8ca"} Dec 15 05:39:48 crc kubenswrapper[4747]: I1215 05:39:48.914229 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:39:48 crc kubenswrapper[4747]: I1215 05:39:48.923956 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"97e304e6-0512-4367-9811-e41ccac42926","Type":"ContainerDied","Data":"8488dc8f66909090aa739c50b6879473338bc8a7db292b9938f2f2eb9f68a2ea"} Dec 15 05:39:48 crc kubenswrapper[4747]: I1215 05:39:48.924006 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8488dc8f66909090aa739c50b6879473338bc8a7db292b9938f2f2eb9f68a2ea" Dec 15 05:39:48 crc kubenswrapper[4747]: I1215 05:39:48.924107 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 15 05:39:49 crc kubenswrapper[4747]: I1215 05:39:49.843726 4747 patch_prober.go:28] interesting pod/router-default-5444994796-gvbxq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 15 05:39:49 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Dec 15 05:39:49 crc kubenswrapper[4747]: [+]process-running ok Dec 15 05:39:49 crc kubenswrapper[4747]: healthz check failed Dec 15 05:39:49 crc kubenswrapper[4747]: I1215 05:39:49.844050 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvbxq" podUID="f9755e7f-72e0-4b8a-94c2-6702dec42d0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 15 05:39:50 crc kubenswrapper[4747]: I1215 05:39:50.843288 4747 patch_prober.go:28] interesting pod/router-default-5444994796-gvbxq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 15 05:39:50 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Dec 15 05:39:50 crc kubenswrapper[4747]: [+]process-running ok Dec 15 05:39:50 crc kubenswrapper[4747]: healthz check failed Dec 15 05:39:50 crc kubenswrapper[4747]: I1215 05:39:50.843357 4747 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvbxq" podUID="f9755e7f-72e0-4b8a-94c2-6702dec42d0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 15 05:39:51 crc kubenswrapper[4747]: I1215 05:39:51.755336 4747 patch_prober.go:28] interesting pod/console-f9d7485db-2sdgk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.32:8443/health\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Dec 15 05:39:51 crc kubenswrapper[4747]: I1215 05:39:51.755437 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-2sdgk" podUID="17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.32:8443/health\": dial tcp 10.217.0.32:8443: connect: connection refused" Dec 15 05:39:51 crc kubenswrapper[4747]: I1215 05:39:51.844589 4747 patch_prober.go:28] interesting pod/router-default-5444994796-gvbxq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 15 05:39:51 crc kubenswrapper[4747]: [-]has-synced failed: reason withheld Dec 15 05:39:51 crc kubenswrapper[4747]: [+]process-running ok Dec 15 05:39:51 crc kubenswrapper[4747]: healthz check failed Dec 15 05:39:51 crc kubenswrapper[4747]: I1215 05:39:51.844669 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvbxq" podUID="f9755e7f-72e0-4b8a-94c2-6702dec42d0b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 15 05:39:52 crc kubenswrapper[4747]: I1215 05:39:52.844449 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-gvbxq" Dec 15 05:39:52 crc kubenswrapper[4747]: I1215 
05:39:52.846917 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-gvbxq" Dec 15 05:39:54 crc kubenswrapper[4747]: I1215 05:39:54.966193 4747 generic.go:334] "Generic (PLEG): container finished" podID="496c9f6b-020f-4ba2-9031-4dfee47f18ab" containerID="d4b046f1ff16694b9ea04214dd8bfc9a280c1a011e3bcbc0de9827e687ab84ca" exitCode=0 Dec 15 05:39:54 crc kubenswrapper[4747]: I1215 05:39:54.966248 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxjhc" event={"ID":"496c9f6b-020f-4ba2-9031-4dfee47f18ab","Type":"ContainerDied","Data":"d4b046f1ff16694b9ea04214dd8bfc9a280c1a011e3bcbc0de9827e687ab84ca"} Dec 15 05:39:54 crc kubenswrapper[4747]: I1215 05:39:54.970050 4747 generic.go:334] "Generic (PLEG): container finished" podID="7f997591-b82e-4c3b-85b9-5106a8168eec" containerID="6e2eb7d06eff800e7b7b4a705a8f1a6f0371e0a79dae3f16aba19c75d23ab940" exitCode=0 Dec 15 05:39:54 crc kubenswrapper[4747]: I1215 05:39:54.970104 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88ln6" event={"ID":"7f997591-b82e-4c3b-85b9-5106a8168eec","Type":"ContainerDied","Data":"6e2eb7d06eff800e7b7b4a705a8f1a6f0371e0a79dae3f16aba19c75d23ab940"} Dec 15 05:39:55 crc kubenswrapper[4747]: I1215 05:39:55.812313 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs\") pod \"network-metrics-daemon-4nn8g\" (UID: \"fca0b2d2-cd19-409a-aa6d-df8b295adf62\") " pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:39:55 crc kubenswrapper[4747]: I1215 05:39:55.818356 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fca0b2d2-cd19-409a-aa6d-df8b295adf62-metrics-certs\") pod \"network-metrics-daemon-4nn8g\" (UID: 
\"fca0b2d2-cd19-409a-aa6d-df8b295adf62\") " pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:39:55 crc kubenswrapper[4747]: I1215 05:39:55.946127 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4nn8g" Dec 15 05:39:57 crc kubenswrapper[4747]: I1215 05:39:57.223793 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:39:58 crc kubenswrapper[4747]: I1215 05:39:58.865415 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 05:39:58 crc kubenswrapper[4747]: I1215 05:39:58.865831 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 05:40:01 crc kubenswrapper[4747]: I1215 05:40:01.478413 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4nn8g"] Dec 15 05:40:01 crc kubenswrapper[4747]: W1215 05:40:01.486514 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfca0b2d2_cd19_409a_aa6d_df8b295adf62.slice/crio-d7ff8bc2926dbc4b36dcd65b5ee34c3c8dfa9a5b851e1fc5ac2f030544549420 WatchSource:0}: Error finding container d7ff8bc2926dbc4b36dcd65b5ee34c3c8dfa9a5b851e1fc5ac2f030544549420: Status 404 returned error can't find the container with id d7ff8bc2926dbc4b36dcd65b5ee34c3c8dfa9a5b851e1fc5ac2f030544549420 Dec 15 05:40:01 crc kubenswrapper[4747]: I1215 
05:40:01.760509 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:40:01 crc kubenswrapper[4747]: I1215 05:40:01.763227 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:40:02 crc kubenswrapper[4747]: I1215 05:40:02.012712 4747 generic.go:334] "Generic (PLEG): container finished" podID="9a524d92-a1c1-4494-b487-ba0df0e6a1ec" containerID="5858e4871934c59c25ec16c559ae584e7c4ddf029520ef607e2de41fab4d16df" exitCode=0 Dec 15 05:40:02 crc kubenswrapper[4747]: I1215 05:40:02.012803 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f66lm" event={"ID":"9a524d92-a1c1-4494-b487-ba0df0e6a1ec","Type":"ContainerDied","Data":"5858e4871934c59c25ec16c559ae584e7c4ddf029520ef607e2de41fab4d16df"} Dec 15 05:40:02 crc kubenswrapper[4747]: I1215 05:40:02.015512 4747 generic.go:334] "Generic (PLEG): container finished" podID="3a8d5b87-e7b5-491f-aee8-98aa02ba9a14" containerID="77bfb5f5d9295fef2137ab15e8017773629c9a67caf39ffafa175f2fb8a87537" exitCode=0 Dec 15 05:40:02 crc kubenswrapper[4747]: I1215 05:40:02.015588 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lwqpd" event={"ID":"3a8d5b87-e7b5-491f-aee8-98aa02ba9a14","Type":"ContainerDied","Data":"77bfb5f5d9295fef2137ab15e8017773629c9a67caf39ffafa175f2fb8a87537"} Dec 15 05:40:02 crc kubenswrapper[4747]: I1215 05:40:02.018001 4747 generic.go:334] "Generic (PLEG): container finished" podID="68eca474-5187-41ca-b67f-cb316a4ab410" containerID="2c8c00ae211fd6e8bde8305f0f2fab031c657c4f6cd78a5b84b2dc426ba60b0d" exitCode=0 Dec 15 05:40:02 crc kubenswrapper[4747]: I1215 05:40:02.018059 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zrdj" 
event={"ID":"68eca474-5187-41ca-b67f-cb316a4ab410","Type":"ContainerDied","Data":"2c8c00ae211fd6e8bde8305f0f2fab031c657c4f6cd78a5b84b2dc426ba60b0d"} Dec 15 05:40:02 crc kubenswrapper[4747]: I1215 05:40:02.023572 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88ln6" event={"ID":"7f997591-b82e-4c3b-85b9-5106a8168eec","Type":"ContainerStarted","Data":"962a68a146298e491c0a7d9eac0fb8dbac9686955ba64532efeabfcd4122db2d"} Dec 15 05:40:02 crc kubenswrapper[4747]: I1215 05:40:02.025901 4747 generic.go:334] "Generic (PLEG): container finished" podID="95e7ce44-2981-4ea8-91a9-a9e897bdc80b" containerID="ebbd128389ddc261843cdb2409d0c92ec0e597b7d6ef8046a3ada546cee18fee" exitCode=0 Dec 15 05:40:02 crc kubenswrapper[4747]: I1215 05:40:02.026035 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwzfq" event={"ID":"95e7ce44-2981-4ea8-91a9-a9e897bdc80b","Type":"ContainerDied","Data":"ebbd128389ddc261843cdb2409d0c92ec0e597b7d6ef8046a3ada546cee18fee"} Dec 15 05:40:02 crc kubenswrapper[4747]: I1215 05:40:02.030105 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxjhc" event={"ID":"496c9f6b-020f-4ba2-9031-4dfee47f18ab","Type":"ContainerStarted","Data":"b993594355de75c6a85a73c13dff97f4d7f7ad7eb0dd3b68fcc4e383a6b457ed"} Dec 15 05:40:02 crc kubenswrapper[4747]: I1215 05:40:02.032864 4747 generic.go:334] "Generic (PLEG): container finished" podID="97c38c52-062a-4f94-9992-f944bb0519ee" containerID="fe86caebfc55bcc4760020b118e3f8649e759a7f0e83ef55fdb5896204e4dd50" exitCode=0 Dec 15 05:40:02 crc kubenswrapper[4747]: I1215 05:40:02.032896 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhfhz" event={"ID":"97c38c52-062a-4f94-9992-f944bb0519ee","Type":"ContainerDied","Data":"fe86caebfc55bcc4760020b118e3f8649e759a7f0e83ef55fdb5896204e4dd50"} Dec 15 05:40:02 crc kubenswrapper[4747]: I1215 
05:40:02.038254 4747 generic.go:334] "Generic (PLEG): container finished" podID="5360354e-e2a9-4bf5-bc74-e1b778b512f5" containerID="9a52824445169a3b9f679187633ed60ce16cd990d7f4ffa9462892fe305b38fe" exitCode=0
Dec 15 05:40:02 crc kubenswrapper[4747]: I1215 05:40:02.038715 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldqmm" event={"ID":"5360354e-e2a9-4bf5-bc74-e1b778b512f5","Type":"ContainerDied","Data":"9a52824445169a3b9f679187633ed60ce16cd990d7f4ffa9462892fe305b38fe"}
Dec 15 05:40:02 crc kubenswrapper[4747]: I1215 05:40:02.043903 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4nn8g" event={"ID":"fca0b2d2-cd19-409a-aa6d-df8b295adf62","Type":"ContainerStarted","Data":"2ed7b14dc96782e7b2e90f14b85d878681432da7fce062c069484bcb1ea2a91d"}
Dec 15 05:40:02 crc kubenswrapper[4747]: I1215 05:40:02.043975 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4nn8g" event={"ID":"fca0b2d2-cd19-409a-aa6d-df8b295adf62","Type":"ContainerStarted","Data":"d7ff8bc2926dbc4b36dcd65b5ee34c3c8dfa9a5b851e1fc5ac2f030544549420"}
Dec 15 05:40:02 crc kubenswrapper[4747]: I1215 05:40:02.111551 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gxjhc" podStartSLOduration=2.691235361 podStartE2EDuration="23.111527891s" podCreationTimestamp="2025-12-15 05:39:39 +0000 UTC" firstStartedPulling="2025-12-15 05:39:40.714651761 +0000 UTC m=+144.411163678" lastFinishedPulling="2025-12-15 05:40:01.134944291 +0000 UTC m=+164.831456208" observedRunningTime="2025-12-15 05:40:02.108454953 +0000 UTC m=+165.804966870" watchObservedRunningTime="2025-12-15 05:40:02.111527891 +0000 UTC m=+165.808039808"
Dec 15 05:40:02 crc kubenswrapper[4747]: I1215 05:40:02.144642 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-88ln6" podStartSLOduration=2.721015961 podStartE2EDuration="23.144604688s" podCreationTimestamp="2025-12-15 05:39:39 +0000 UTC" firstStartedPulling="2025-12-15 05:39:40.711381953 +0000 UTC m=+144.407893871" lastFinishedPulling="2025-12-15 05:40:01.134970681 +0000 UTC m=+164.831482598" observedRunningTime="2025-12-15 05:40:02.142125528 +0000 UTC m=+165.838637435" watchObservedRunningTime="2025-12-15 05:40:02.144604688 +0000 UTC m=+165.841116716"
Dec 15 05:40:03 crc kubenswrapper[4747]: I1215 05:40:03.053362 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lwqpd" event={"ID":"3a8d5b87-e7b5-491f-aee8-98aa02ba9a14","Type":"ContainerStarted","Data":"66266cf2daa9cad199afda7e01118b190bf4c155092796db134168d0a5415ba2"}
Dec 15 05:40:03 crc kubenswrapper[4747]: I1215 05:40:03.056044 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zrdj" event={"ID":"68eca474-5187-41ca-b67f-cb316a4ab410","Type":"ContainerStarted","Data":"e5a5c4d573912ab05e31be526d1bcc2fab630bc4d43f092db81c43e6d63448ab"}
Dec 15 05:40:03 crc kubenswrapper[4747]: I1215 05:40:03.058233 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwzfq" event={"ID":"95e7ce44-2981-4ea8-91a9-a9e897bdc80b","Type":"ContainerStarted","Data":"ef9cbd583d77724f6d9f9331c4d62112d0c9216606570fdedbb6ab940f58926b"}
Dec 15 05:40:03 crc kubenswrapper[4747]: I1215 05:40:03.060967 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f66lm" event={"ID":"9a524d92-a1c1-4494-b487-ba0df0e6a1ec","Type":"ContainerStarted","Data":"e45fe7d5f93326148303da0b04cf5d9ee9bdcdd1dd56d6b397cf9d7ac983ede3"}
Dec 15 05:40:03 crc kubenswrapper[4747]: I1215 05:40:03.063208 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhfhz" event={"ID":"97c38c52-062a-4f94-9992-f944bb0519ee","Type":"ContainerStarted","Data":"a55b037d1361bf6bdeb116e53646d89dd20ba8c32333ecf135a3cbfff6a69724"}
Dec 15 05:40:03 crc kubenswrapper[4747]: I1215 05:40:03.065624 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldqmm" event={"ID":"5360354e-e2a9-4bf5-bc74-e1b778b512f5","Type":"ContainerStarted","Data":"3837e4714103e093822a038469e0d44673a8ed182a859da4688244a3fb067e1f"}
Dec 15 05:40:03 crc kubenswrapper[4747]: I1215 05:40:03.067674 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4nn8g" event={"ID":"fca0b2d2-cd19-409a-aa6d-df8b295adf62","Type":"ContainerStarted","Data":"c019d4a46305ce04ea37020ca1ff2adfd8d8f1231f4c12481c9f785753d74bee"}
Dec 15 05:40:03 crc kubenswrapper[4747]: I1215 05:40:03.074737 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lwqpd" podStartSLOduration=2.132659082 podStartE2EDuration="26.074722828s" podCreationTimestamp="2025-12-15 05:39:37 +0000 UTC" firstStartedPulling="2025-12-15 05:39:38.672762103 +0000 UTC m=+142.369274010" lastFinishedPulling="2025-12-15 05:40:02.614825839 +0000 UTC m=+166.311337756" observedRunningTime="2025-12-15 05:40:03.073035997 +0000 UTC m=+166.769547914" watchObservedRunningTime="2025-12-15 05:40:03.074722828 +0000 UTC m=+166.771234744"
Dec 15 05:40:03 crc kubenswrapper[4747]: I1215 05:40:03.111741 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ldqmm" podStartSLOduration=3.231402532 podStartE2EDuration="23.111729144s" podCreationTimestamp="2025-12-15 05:39:40 +0000 UTC" firstStartedPulling="2025-12-15 05:39:42.823720152 +0000 UTC m=+146.520232069" lastFinishedPulling="2025-12-15 05:40:02.704046764 +0000 UTC m=+166.400558681" observedRunningTime="2025-12-15 05:40:03.108187285 +0000 UTC m=+166.804699202" watchObservedRunningTime="2025-12-15 05:40:03.111729144 +0000 UTC m=+166.808241061"
Dec 15 05:40:03 crc kubenswrapper[4747]: I1215 05:40:03.113478 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f66lm" podStartSLOduration=2.2499218 podStartE2EDuration="26.113469947s" podCreationTimestamp="2025-12-15 05:39:37 +0000 UTC" firstStartedPulling="2025-12-15 05:39:38.694713865 +0000 UTC m=+142.391225783" lastFinishedPulling="2025-12-15 05:40:02.558262023 +0000 UTC m=+166.254773930" observedRunningTime="2025-12-15 05:40:03.095434178 +0000 UTC m=+166.791946095" watchObservedRunningTime="2025-12-15 05:40:03.113469947 +0000 UTC m=+166.809981864"
Dec 15 05:40:03 crc kubenswrapper[4747]: I1215 05:40:03.123548 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hhfhz" podStartSLOduration=2.210024584 podStartE2EDuration="23.12353991s" podCreationTimestamp="2025-12-15 05:39:40 +0000 UTC" firstStartedPulling="2025-12-15 05:39:41.744617215 +0000 UTC m=+145.441129131" lastFinishedPulling="2025-12-15 05:40:02.65813255 +0000 UTC m=+166.354644457" observedRunningTime="2025-12-15 05:40:03.120067792 +0000 UTC m=+166.816579709" watchObservedRunningTime="2025-12-15 05:40:03.12353991 +0000 UTC m=+166.820051827"
Dec 15 05:40:03 crc kubenswrapper[4747]: I1215 05:40:03.135332 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-4nn8g" podStartSLOduration=149.135316691 podStartE2EDuration="2m29.135316691s" podCreationTimestamp="2025-12-15 05:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:40:03.134525955 +0000 UTC m=+166.831037872" watchObservedRunningTime="2025-12-15 05:40:03.135316691 +0000 UTC m=+166.831828609"
Dec 15 05:40:03 crc kubenswrapper[4747]: I1215 05:40:03.169753 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jwzfq" podStartSLOduration=2.292359528 podStartE2EDuration="26.169729903s" podCreationTimestamp="2025-12-15 05:39:37 +0000 UTC" firstStartedPulling="2025-12-15 05:39:38.678675882 +0000 UTC m=+142.375187799" lastFinishedPulling="2025-12-15 05:40:02.556046257 +0000 UTC m=+166.252558174" observedRunningTime="2025-12-15 05:40:03.154024645 +0000 UTC m=+166.850536561" watchObservedRunningTime="2025-12-15 05:40:03.169729903 +0000 UTC m=+166.866241820"
Dec 15 05:40:03 crc kubenswrapper[4747]: I1215 05:40:03.171078 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6zrdj" podStartSLOduration=2.114390064 podStartE2EDuration="26.171072967s" podCreationTimestamp="2025-12-15 05:39:37 +0000 UTC" firstStartedPulling="2025-12-15 05:39:38.675782101 +0000 UTC m=+142.372294018" lastFinishedPulling="2025-12-15 05:40:02.732465003 +0000 UTC m=+166.428976921" observedRunningTime="2025-12-15 05:40:03.167023374 +0000 UTC m=+166.863535291" watchObservedRunningTime="2025-12-15 05:40:03.171072967 +0000 UTC m=+166.867584884"
Dec 15 05:40:07 crc kubenswrapper[4747]: I1215 05:40:07.754031 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f66lm"
Dec 15 05:40:07 crc kubenswrapper[4747]: I1215 05:40:07.754477 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f66lm"
Dec 15 05:40:07 crc kubenswrapper[4747]: I1215 05:40:07.843276 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f66lm"
Dec 15 05:40:07 crc kubenswrapper[4747]: I1215 05:40:07.884859 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6zrdj"
Dec 15 05:40:07 crc kubenswrapper[4747]: I1215 05:40:07.885072 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6zrdj"
Dec 15 05:40:07 crc kubenswrapper[4747]: I1215 05:40:07.913698 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6zrdj"
Dec 15 05:40:08 crc kubenswrapper[4747]: I1215 05:40:08.132104 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lwqpd"
Dec 15 05:40:08 crc kubenswrapper[4747]: I1215 05:40:08.132172 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lwqpd"
Dec 15 05:40:08 crc kubenswrapper[4747]: I1215 05:40:08.136416 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6zrdj"
Dec 15 05:40:08 crc kubenswrapper[4747]: I1215 05:40:08.139500 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f66lm"
Dec 15 05:40:08 crc kubenswrapper[4747]: I1215 05:40:08.171261 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lwqpd"
Dec 15 05:40:08 crc kubenswrapper[4747]: I1215 05:40:08.277941 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jwzfq"
Dec 15 05:40:08 crc kubenswrapper[4747]: I1215 05:40:08.277981 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jwzfq"
Dec 15 05:40:08 crc kubenswrapper[4747]: I1215 05:40:08.309191 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jwzfq"
Dec 15 05:40:09 crc kubenswrapper[4747]: I1215 05:40:09.151878 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lwqpd"
Dec 15 05:40:09 crc kubenswrapper[4747]: I1215 05:40:09.153133 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jwzfq"
Dec 15 05:40:09 crc kubenswrapper[4747]: I1215 05:40:09.671817 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gxjhc"
Dec 15 05:40:09 crc kubenswrapper[4747]: I1215 05:40:09.671870 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gxjhc"
Dec 15 05:40:09 crc kubenswrapper[4747]: I1215 05:40:09.726328 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gxjhc"
Dec 15 05:40:09 crc kubenswrapper[4747]: I1215 05:40:09.940358 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v472l"]
Dec 15 05:40:10 crc kubenswrapper[4747]: I1215 05:40:10.084135 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-88ln6"
Dec 15 05:40:10 crc kubenswrapper[4747]: I1215 05:40:10.084183 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-88ln6"
Dec 15 05:40:10 crc kubenswrapper[4747]: I1215 05:40:10.131613 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-88ln6"
Dec 15 05:40:10 crc kubenswrapper[4747]: I1215 05:40:10.181024 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gxjhc"
Dec 15 05:40:10 crc kubenswrapper[4747]: I1215 05:40:10.195193 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-88ln6"
Dec 15 05:40:10 crc kubenswrapper[4747]: I1215 05:40:10.249264 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lwqpd"]
Dec 15 05:40:10 crc kubenswrapper[4747]: I1215 05:40:10.447030 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jwzfq"]
Dec 15 05:40:10 crc kubenswrapper[4747]: I1215 05:40:10.898938 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hhfhz"
Dec 15 05:40:10 crc kubenswrapper[4747]: I1215 05:40:10.898992 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hhfhz"
Dec 15 05:40:10 crc kubenswrapper[4747]: I1215 05:40:10.940001 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hhfhz"
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.119499 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jwzfq" podUID="95e7ce44-2981-4ea8-91a9-a9e897bdc80b" containerName="registry-server" containerID="cri-o://ef9cbd583d77724f6d9f9331c4d62112d0c9216606570fdedbb6ab940f58926b" gracePeriod=2
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.119640 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lwqpd" podUID="3a8d5b87-e7b5-491f-aee8-98aa02ba9a14" containerName="registry-server" containerID="cri-o://66266cf2daa9cad199afda7e01118b190bf4c155092796db134168d0a5415ba2" gracePeriod=2
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.155375 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hhfhz"
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.381413 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ldqmm"
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.382449 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ldqmm"
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.423177 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ldqmm"
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.489237 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lwqpd"
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.494444 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jwzfq"
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.522537 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm5zz\" (UniqueName: \"kubernetes.io/projected/3a8d5b87-e7b5-491f-aee8-98aa02ba9a14-kube-api-access-qm5zz\") pod \"3a8d5b87-e7b5-491f-aee8-98aa02ba9a14\" (UID: \"3a8d5b87-e7b5-491f-aee8-98aa02ba9a14\") "
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.522603 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8d5b87-e7b5-491f-aee8-98aa02ba9a14-utilities\") pod \"3a8d5b87-e7b5-491f-aee8-98aa02ba9a14\" (UID: \"3a8d5b87-e7b5-491f-aee8-98aa02ba9a14\") "
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.522677 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8d5b87-e7b5-491f-aee8-98aa02ba9a14-catalog-content\") pod \"3a8d5b87-e7b5-491f-aee8-98aa02ba9a14\" (UID: \"3a8d5b87-e7b5-491f-aee8-98aa02ba9a14\") "
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.522714 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95e7ce44-2981-4ea8-91a9-a9e897bdc80b-utilities\") pod \"95e7ce44-2981-4ea8-91a9-a9e897bdc80b\" (UID: \"95e7ce44-2981-4ea8-91a9-a9e897bdc80b\") "
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.522749 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95e7ce44-2981-4ea8-91a9-a9e897bdc80b-catalog-content\") pod \"95e7ce44-2981-4ea8-91a9-a9e897bdc80b\" (UID: \"95e7ce44-2981-4ea8-91a9-a9e897bdc80b\") "
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.522780 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9nwv\" (UniqueName: \"kubernetes.io/projected/95e7ce44-2981-4ea8-91a9-a9e897bdc80b-kube-api-access-z9nwv\") pod \"95e7ce44-2981-4ea8-91a9-a9e897bdc80b\" (UID: \"95e7ce44-2981-4ea8-91a9-a9e897bdc80b\") "
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.524262 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95e7ce44-2981-4ea8-91a9-a9e897bdc80b-utilities" (OuterVolumeSpecName: "utilities") pod "95e7ce44-2981-4ea8-91a9-a9e897bdc80b" (UID: "95e7ce44-2981-4ea8-91a9-a9e897bdc80b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.524311 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a8d5b87-e7b5-491f-aee8-98aa02ba9a14-utilities" (OuterVolumeSpecName: "utilities") pod "3a8d5b87-e7b5-491f-aee8-98aa02ba9a14" (UID: "3a8d5b87-e7b5-491f-aee8-98aa02ba9a14"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.528875 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95e7ce44-2981-4ea8-91a9-a9e897bdc80b-kube-api-access-z9nwv" (OuterVolumeSpecName: "kube-api-access-z9nwv") pod "95e7ce44-2981-4ea8-91a9-a9e897bdc80b" (UID: "95e7ce44-2981-4ea8-91a9-a9e897bdc80b"). InnerVolumeSpecName "kube-api-access-z9nwv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.531113 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a8d5b87-e7b5-491f-aee8-98aa02ba9a14-kube-api-access-qm5zz" (OuterVolumeSpecName: "kube-api-access-qm5zz") pod "3a8d5b87-e7b5-491f-aee8-98aa02ba9a14" (UID: "3a8d5b87-e7b5-491f-aee8-98aa02ba9a14"). InnerVolumeSpecName "kube-api-access-qm5zz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.571957 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a8d5b87-e7b5-491f-aee8-98aa02ba9a14-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a8d5b87-e7b5-491f-aee8-98aa02ba9a14" (UID: "3a8d5b87-e7b5-491f-aee8-98aa02ba9a14"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.577234 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95e7ce44-2981-4ea8-91a9-a9e897bdc80b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95e7ce44-2981-4ea8-91a9-a9e897bdc80b" (UID: "95e7ce44-2981-4ea8-91a9-a9e897bdc80b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.623958 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95e7ce44-2981-4ea8-91a9-a9e897bdc80b-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.623985 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9nwv\" (UniqueName: \"kubernetes.io/projected/95e7ce44-2981-4ea8-91a9-a9e897bdc80b-kube-api-access-z9nwv\") on node \"crc\" DevicePath \"\""
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.623999 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm5zz\" (UniqueName: \"kubernetes.io/projected/3a8d5b87-e7b5-491f-aee8-98aa02ba9a14-kube-api-access-qm5zz\") on node \"crc\" DevicePath \"\""
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.624008 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8d5b87-e7b5-491f-aee8-98aa02ba9a14-utilities\") on node \"crc\" DevicePath \"\""
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.624016 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8d5b87-e7b5-491f-aee8-98aa02ba9a14-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.624024 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95e7ce44-2981-4ea8-91a9-a9e897bdc80b-utilities\") on node \"crc\" DevicePath \"\""
Dec 15 05:40:11 crc kubenswrapper[4747]: I1215 05:40:11.867058 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mpvdj"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.127219 4747 generic.go:334] "Generic (PLEG): container finished" podID="3a8d5b87-e7b5-491f-aee8-98aa02ba9a14" containerID="66266cf2daa9cad199afda7e01118b190bf4c155092796db134168d0a5415ba2" exitCode=0
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.127308 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lwqpd"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.127332 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lwqpd" event={"ID":"3a8d5b87-e7b5-491f-aee8-98aa02ba9a14","Type":"ContainerDied","Data":"66266cf2daa9cad199afda7e01118b190bf4c155092796db134168d0a5415ba2"}
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.127389 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lwqpd" event={"ID":"3a8d5b87-e7b5-491f-aee8-98aa02ba9a14","Type":"ContainerDied","Data":"b8a2d4290a3a26d50d86ad200533ee7973e7a284edcce6854fae4ac5cf347c1d"}
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.127417 4747 scope.go:117] "RemoveContainer" containerID="66266cf2daa9cad199afda7e01118b190bf4c155092796db134168d0a5415ba2"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.130918 4747 generic.go:334] "Generic (PLEG): container finished" podID="95e7ce44-2981-4ea8-91a9-a9e897bdc80b" containerID="ef9cbd583d77724f6d9f9331c4d62112d0c9216606570fdedbb6ab940f58926b" exitCode=0
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.131273 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jwzfq"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.131734 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwzfq" event={"ID":"95e7ce44-2981-4ea8-91a9-a9e897bdc80b","Type":"ContainerDied","Data":"ef9cbd583d77724f6d9f9331c4d62112d0c9216606570fdedbb6ab940f58926b"}
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.132883 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwzfq" event={"ID":"95e7ce44-2981-4ea8-91a9-a9e897bdc80b","Type":"ContainerDied","Data":"d9cdb100dec2812d3600393759d3058a20192e8ac0aa67bbba82dd47514dce3c"}
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.144519 4747 scope.go:117] "RemoveContainer" containerID="77bfb5f5d9295fef2137ab15e8017773629c9a67caf39ffafa175f2fb8a87537"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.165266 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jwzfq"]
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.168582 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jwzfq"]
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.174161 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ldqmm"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.174576 4747 scope.go:117] "RemoveContainer" containerID="828e9725304b4465b0f9e2b6c3649417bfe1bc9124bcb8a05b8c2b423c41e51d"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.176938 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lwqpd"]
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.179804 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lwqpd"]
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.187776 4747 scope.go:117] "RemoveContainer" containerID="66266cf2daa9cad199afda7e01118b190bf4c155092796db134168d0a5415ba2"
Dec 15 05:40:12 crc kubenswrapper[4747]: E1215 05:40:12.196970 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66266cf2daa9cad199afda7e01118b190bf4c155092796db134168d0a5415ba2\": container with ID starting with 66266cf2daa9cad199afda7e01118b190bf4c155092796db134168d0a5415ba2 not found: ID does not exist" containerID="66266cf2daa9cad199afda7e01118b190bf4c155092796db134168d0a5415ba2"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.197045 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66266cf2daa9cad199afda7e01118b190bf4c155092796db134168d0a5415ba2"} err="failed to get container status \"66266cf2daa9cad199afda7e01118b190bf4c155092796db134168d0a5415ba2\": rpc error: code = NotFound desc = could not find container \"66266cf2daa9cad199afda7e01118b190bf4c155092796db134168d0a5415ba2\": container with ID starting with 66266cf2daa9cad199afda7e01118b190bf4c155092796db134168d0a5415ba2 not found: ID does not exist"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.197646 4747 scope.go:117] "RemoveContainer" containerID="77bfb5f5d9295fef2137ab15e8017773629c9a67caf39ffafa175f2fb8a87537"
Dec 15 05:40:12 crc kubenswrapper[4747]: E1215 05:40:12.198067 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77bfb5f5d9295fef2137ab15e8017773629c9a67caf39ffafa175f2fb8a87537\": container with ID starting with 77bfb5f5d9295fef2137ab15e8017773629c9a67caf39ffafa175f2fb8a87537 not found: ID does not exist" containerID="77bfb5f5d9295fef2137ab15e8017773629c9a67caf39ffafa175f2fb8a87537"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.198103 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77bfb5f5d9295fef2137ab15e8017773629c9a67caf39ffafa175f2fb8a87537"} err="failed to get container status \"77bfb5f5d9295fef2137ab15e8017773629c9a67caf39ffafa175f2fb8a87537\": rpc error: code = NotFound desc = could not find container \"77bfb5f5d9295fef2137ab15e8017773629c9a67caf39ffafa175f2fb8a87537\": container with ID starting with 77bfb5f5d9295fef2137ab15e8017773629c9a67caf39ffafa175f2fb8a87537 not found: ID does not exist"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.198138 4747 scope.go:117] "RemoveContainer" containerID="828e9725304b4465b0f9e2b6c3649417bfe1bc9124bcb8a05b8c2b423c41e51d"
Dec 15 05:40:12 crc kubenswrapper[4747]: E1215 05:40:12.198495 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"828e9725304b4465b0f9e2b6c3649417bfe1bc9124bcb8a05b8c2b423c41e51d\": container with ID starting with 828e9725304b4465b0f9e2b6c3649417bfe1bc9124bcb8a05b8c2b423c41e51d not found: ID does not exist" containerID="828e9725304b4465b0f9e2b6c3649417bfe1bc9124bcb8a05b8c2b423c41e51d"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.198532 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"828e9725304b4465b0f9e2b6c3649417bfe1bc9124bcb8a05b8c2b423c41e51d"} err="failed to get container status \"828e9725304b4465b0f9e2b6c3649417bfe1bc9124bcb8a05b8c2b423c41e51d\": rpc error: code = NotFound desc = could not find container \"828e9725304b4465b0f9e2b6c3649417bfe1bc9124bcb8a05b8c2b423c41e51d\": container with ID starting with 828e9725304b4465b0f9e2b6c3649417bfe1bc9124bcb8a05b8c2b423c41e51d not found: ID does not exist"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.198571 4747 scope.go:117] "RemoveContainer" containerID="ef9cbd583d77724f6d9f9331c4d62112d0c9216606570fdedbb6ab940f58926b"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.209522 4747 scope.go:117] "RemoveContainer" containerID="ebbd128389ddc261843cdb2409d0c92ec0e597b7d6ef8046a3ada546cee18fee"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.222338 4747 scope.go:117] "RemoveContainer" containerID="21a5db804ab466002fc353f26e06d1dd2b6bca6aae267d5e87ab4dc3095a05ee"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.232759 4747 scope.go:117] "RemoveContainer" containerID="ef9cbd583d77724f6d9f9331c4d62112d0c9216606570fdedbb6ab940f58926b"
Dec 15 05:40:12 crc kubenswrapper[4747]: E1215 05:40:12.233202 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef9cbd583d77724f6d9f9331c4d62112d0c9216606570fdedbb6ab940f58926b\": container with ID starting with ef9cbd583d77724f6d9f9331c4d62112d0c9216606570fdedbb6ab940f58926b not found: ID does not exist" containerID="ef9cbd583d77724f6d9f9331c4d62112d0c9216606570fdedbb6ab940f58926b"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.233235 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef9cbd583d77724f6d9f9331c4d62112d0c9216606570fdedbb6ab940f58926b"} err="failed to get container status \"ef9cbd583d77724f6d9f9331c4d62112d0c9216606570fdedbb6ab940f58926b\": rpc error: code = NotFound desc = could not find container \"ef9cbd583d77724f6d9f9331c4d62112d0c9216606570fdedbb6ab940f58926b\": container with ID starting with ef9cbd583d77724f6d9f9331c4d62112d0c9216606570fdedbb6ab940f58926b not found: ID does not exist"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.233253 4747 scope.go:117] "RemoveContainer" containerID="ebbd128389ddc261843cdb2409d0c92ec0e597b7d6ef8046a3ada546cee18fee"
Dec 15 05:40:12 crc kubenswrapper[4747]: E1215 05:40:12.233527 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebbd128389ddc261843cdb2409d0c92ec0e597b7d6ef8046a3ada546cee18fee\": container with ID starting with ebbd128389ddc261843cdb2409d0c92ec0e597b7d6ef8046a3ada546cee18fee not found: ID does not exist" containerID="ebbd128389ddc261843cdb2409d0c92ec0e597b7d6ef8046a3ada546cee18fee"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.233560 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebbd128389ddc261843cdb2409d0c92ec0e597b7d6ef8046a3ada546cee18fee"} err="failed to get container status \"ebbd128389ddc261843cdb2409d0c92ec0e597b7d6ef8046a3ada546cee18fee\": rpc error: code = NotFound desc = could not find container \"ebbd128389ddc261843cdb2409d0c92ec0e597b7d6ef8046a3ada546cee18fee\": container with ID starting with ebbd128389ddc261843cdb2409d0c92ec0e597b7d6ef8046a3ada546cee18fee not found: ID does not exist"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.233587 4747 scope.go:117] "RemoveContainer" containerID="21a5db804ab466002fc353f26e06d1dd2b6bca6aae267d5e87ab4dc3095a05ee"
Dec 15 05:40:12 crc kubenswrapper[4747]: E1215 05:40:12.233860 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21a5db804ab466002fc353f26e06d1dd2b6bca6aae267d5e87ab4dc3095a05ee\": container with ID starting with 21a5db804ab466002fc353f26e06d1dd2b6bca6aae267d5e87ab4dc3095a05ee not found: ID does not exist" containerID="21a5db804ab466002fc353f26e06d1dd2b6bca6aae267d5e87ab4dc3095a05ee"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.233884 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21a5db804ab466002fc353f26e06d1dd2b6bca6aae267d5e87ab4dc3095a05ee"} err="failed to get container status \"21a5db804ab466002fc353f26e06d1dd2b6bca6aae267d5e87ab4dc3095a05ee\": rpc error: code = NotFound desc = could not find container \"21a5db804ab466002fc353f26e06d1dd2b6bca6aae267d5e87ab4dc3095a05ee\": container with ID starting with 21a5db804ab466002fc353f26e06d1dd2b6bca6aae267d5e87ab4dc3095a05ee not found: ID does not exist"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.637870 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a8d5b87-e7b5-491f-aee8-98aa02ba9a14" path="/var/lib/kubelet/pods/3a8d5b87-e7b5-491f-aee8-98aa02ba9a14/volumes"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.638539 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95e7ce44-2981-4ea8-91a9-a9e897bdc80b" path="/var/lib/kubelet/pods/95e7ce44-2981-4ea8-91a9-a9e897bdc80b/volumes"
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.651574 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-88ln6"]
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.652809 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-88ln6" podUID="7f997591-b82e-4c3b-85b9-5106a8168eec" containerName="registry-server" containerID="cri-o://962a68a146298e491c0a7d9eac0fb8dbac9686955ba64532efeabfcd4122db2d" gracePeriod=2
Dec 15 05:40:12 crc kubenswrapper[4747]: I1215 05:40:12.981408 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-88ln6"
Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.038415 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f997591-b82e-4c3b-85b9-5106a8168eec-catalog-content\") pod \"7f997591-b82e-4c3b-85b9-5106a8168eec\" (UID: \"7f997591-b82e-4c3b-85b9-5106a8168eec\") "
Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.038471 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9d8j\" (UniqueName: \"kubernetes.io/projected/7f997591-b82e-4c3b-85b9-5106a8168eec-kube-api-access-t9d8j\") pod \"7f997591-b82e-4c3b-85b9-5106a8168eec\" (UID: \"7f997591-b82e-4c3b-85b9-5106a8168eec\") "
Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.038523 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f997591-b82e-4c3b-85b9-5106a8168eec-utilities\") pod \"7f997591-b82e-4c3b-85b9-5106a8168eec\" (UID: \"7f997591-b82e-4c3b-85b9-5106a8168eec\") "
Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.039336 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f997591-b82e-4c3b-85b9-5106a8168eec-utilities" (OuterVolumeSpecName: "utilities") pod "7f997591-b82e-4c3b-85b9-5106a8168eec" (UID: "7f997591-b82e-4c3b-85b9-5106a8168eec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.044184 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f997591-b82e-4c3b-85b9-5106a8168eec-kube-api-access-t9d8j" (OuterVolumeSpecName: "kube-api-access-t9d8j") pod "7f997591-b82e-4c3b-85b9-5106a8168eec" (UID: "7f997591-b82e-4c3b-85b9-5106a8168eec"). InnerVolumeSpecName "kube-api-access-t9d8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.059128 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f997591-b82e-4c3b-85b9-5106a8168eec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f997591-b82e-4c3b-85b9-5106a8168eec" (UID: "7f997591-b82e-4c3b-85b9-5106a8168eec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.139458 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f997591-b82e-4c3b-85b9-5106a8168eec-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.139487 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9d8j\" (UniqueName: \"kubernetes.io/projected/7f997591-b82e-4c3b-85b9-5106a8168eec-kube-api-access-t9d8j\") on node \"crc\" DevicePath \"\""
Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.139500 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f997591-b82e-4c3b-85b9-5106a8168eec-utilities\") on node \"crc\" DevicePath \"\""
Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.141066 4747 generic.go:334] "Generic (PLEG): container finished" podID="7f997591-b82e-4c3b-85b9-5106a8168eec" containerID="962a68a146298e491c0a7d9eac0fb8dbac9686955ba64532efeabfcd4122db2d" exitCode=0
Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.141138 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-88ln6" Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.141135 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88ln6" event={"ID":"7f997591-b82e-4c3b-85b9-5106a8168eec","Type":"ContainerDied","Data":"962a68a146298e491c0a7d9eac0fb8dbac9686955ba64532efeabfcd4122db2d"} Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.141184 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88ln6" event={"ID":"7f997591-b82e-4c3b-85b9-5106a8168eec","Type":"ContainerDied","Data":"4fd8a16e0e18f7b4fcb01c1da294e1409810717f403bbaa3ac7660d68e7e47c8"} Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.141210 4747 scope.go:117] "RemoveContainer" containerID="962a68a146298e491c0a7d9eac0fb8dbac9686955ba64532efeabfcd4122db2d" Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.161008 4747 scope.go:117] "RemoveContainer" containerID="6e2eb7d06eff800e7b7b4a705a8f1a6f0371e0a79dae3f16aba19c75d23ab940" Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.164714 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-88ln6"] Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.168670 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-88ln6"] Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.193708 4747 scope.go:117] "RemoveContainer" containerID="bf3f35cf634b95a3a93dacd5fcbb75f407bcb1e74c4d08d06fffd6adc421372c" Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.205176 4747 scope.go:117] "RemoveContainer" containerID="962a68a146298e491c0a7d9eac0fb8dbac9686955ba64532efeabfcd4122db2d" Dec 15 05:40:13 crc kubenswrapper[4747]: E1215 05:40:13.205511 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"962a68a146298e491c0a7d9eac0fb8dbac9686955ba64532efeabfcd4122db2d\": container with ID starting with 962a68a146298e491c0a7d9eac0fb8dbac9686955ba64532efeabfcd4122db2d not found: ID does not exist" containerID="962a68a146298e491c0a7d9eac0fb8dbac9686955ba64532efeabfcd4122db2d" Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.205545 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"962a68a146298e491c0a7d9eac0fb8dbac9686955ba64532efeabfcd4122db2d"} err="failed to get container status \"962a68a146298e491c0a7d9eac0fb8dbac9686955ba64532efeabfcd4122db2d\": rpc error: code = NotFound desc = could not find container \"962a68a146298e491c0a7d9eac0fb8dbac9686955ba64532efeabfcd4122db2d\": container with ID starting with 962a68a146298e491c0a7d9eac0fb8dbac9686955ba64532efeabfcd4122db2d not found: ID does not exist" Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.205566 4747 scope.go:117] "RemoveContainer" containerID="6e2eb7d06eff800e7b7b4a705a8f1a6f0371e0a79dae3f16aba19c75d23ab940" Dec 15 05:40:13 crc kubenswrapper[4747]: E1215 05:40:13.206000 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e2eb7d06eff800e7b7b4a705a8f1a6f0371e0a79dae3f16aba19c75d23ab940\": container with ID starting with 6e2eb7d06eff800e7b7b4a705a8f1a6f0371e0a79dae3f16aba19c75d23ab940 not found: ID does not exist" containerID="6e2eb7d06eff800e7b7b4a705a8f1a6f0371e0a79dae3f16aba19c75d23ab940" Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.206023 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e2eb7d06eff800e7b7b4a705a8f1a6f0371e0a79dae3f16aba19c75d23ab940"} err="failed to get container status \"6e2eb7d06eff800e7b7b4a705a8f1a6f0371e0a79dae3f16aba19c75d23ab940\": rpc error: code = NotFound desc = could not find container \"6e2eb7d06eff800e7b7b4a705a8f1a6f0371e0a79dae3f16aba19c75d23ab940\": container with ID 
starting with 6e2eb7d06eff800e7b7b4a705a8f1a6f0371e0a79dae3f16aba19c75d23ab940 not found: ID does not exist" Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.206045 4747 scope.go:117] "RemoveContainer" containerID="bf3f35cf634b95a3a93dacd5fcbb75f407bcb1e74c4d08d06fffd6adc421372c" Dec 15 05:40:13 crc kubenswrapper[4747]: E1215 05:40:13.206386 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf3f35cf634b95a3a93dacd5fcbb75f407bcb1e74c4d08d06fffd6adc421372c\": container with ID starting with bf3f35cf634b95a3a93dacd5fcbb75f407bcb1e74c4d08d06fffd6adc421372c not found: ID does not exist" containerID="bf3f35cf634b95a3a93dacd5fcbb75f407bcb1e74c4d08d06fffd6adc421372c" Dec 15 05:40:13 crc kubenswrapper[4747]: I1215 05:40:13.206407 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf3f35cf634b95a3a93dacd5fcbb75f407bcb1e74c4d08d06fffd6adc421372c"} err="failed to get container status \"bf3f35cf634b95a3a93dacd5fcbb75f407bcb1e74c4d08d06fffd6adc421372c\": rpc error: code = NotFound desc = could not find container \"bf3f35cf634b95a3a93dacd5fcbb75f407bcb1e74c4d08d06fffd6adc421372c\": container with ID starting with bf3f35cf634b95a3a93dacd5fcbb75f407bcb1e74c4d08d06fffd6adc421372c not found: ID does not exist" Dec 15 05:40:14 crc kubenswrapper[4747]: I1215 05:40:14.636676 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f997591-b82e-4c3b-85b9-5106a8168eec" path="/var/lib/kubelet/pods/7f997591-b82e-4c3b-85b9-5106a8168eec/volumes" Dec 15 05:40:15 crc kubenswrapper[4747]: I1215 05:40:15.049167 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ldqmm"] Dec 15 05:40:15 crc kubenswrapper[4747]: I1215 05:40:15.153440 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ldqmm" 
podUID="5360354e-e2a9-4bf5-bc74-e1b778b512f5" containerName="registry-server" containerID="cri-o://3837e4714103e093822a038469e0d44673a8ed182a859da4688244a3fb067e1f" gracePeriod=2 Dec 15 05:40:15 crc kubenswrapper[4747]: I1215 05:40:15.464015 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ldqmm" Dec 15 05:40:15 crc kubenswrapper[4747]: I1215 05:40:15.567180 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5360354e-e2a9-4bf5-bc74-e1b778b512f5-utilities\") pod \"5360354e-e2a9-4bf5-bc74-e1b778b512f5\" (UID: \"5360354e-e2a9-4bf5-bc74-e1b778b512f5\") " Dec 15 05:40:15 crc kubenswrapper[4747]: I1215 05:40:15.567235 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6ffp\" (UniqueName: \"kubernetes.io/projected/5360354e-e2a9-4bf5-bc74-e1b778b512f5-kube-api-access-b6ffp\") pod \"5360354e-e2a9-4bf5-bc74-e1b778b512f5\" (UID: \"5360354e-e2a9-4bf5-bc74-e1b778b512f5\") " Dec 15 05:40:15 crc kubenswrapper[4747]: I1215 05:40:15.567271 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5360354e-e2a9-4bf5-bc74-e1b778b512f5-catalog-content\") pod \"5360354e-e2a9-4bf5-bc74-e1b778b512f5\" (UID: \"5360354e-e2a9-4bf5-bc74-e1b778b512f5\") " Dec 15 05:40:15 crc kubenswrapper[4747]: I1215 05:40:15.568844 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5360354e-e2a9-4bf5-bc74-e1b778b512f5-utilities" (OuterVolumeSpecName: "utilities") pod "5360354e-e2a9-4bf5-bc74-e1b778b512f5" (UID: "5360354e-e2a9-4bf5-bc74-e1b778b512f5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:40:15 crc kubenswrapper[4747]: I1215 05:40:15.579435 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5360354e-e2a9-4bf5-bc74-e1b778b512f5-kube-api-access-b6ffp" (OuterVolumeSpecName: "kube-api-access-b6ffp") pod "5360354e-e2a9-4bf5-bc74-e1b778b512f5" (UID: "5360354e-e2a9-4bf5-bc74-e1b778b512f5"). InnerVolumeSpecName "kube-api-access-b6ffp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:40:15 crc kubenswrapper[4747]: I1215 05:40:15.669644 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5360354e-e2a9-4bf5-bc74-e1b778b512f5-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:15 crc kubenswrapper[4747]: I1215 05:40:15.669681 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6ffp\" (UniqueName: \"kubernetes.io/projected/5360354e-e2a9-4bf5-bc74-e1b778b512f5-kube-api-access-b6ffp\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:15 crc kubenswrapper[4747]: I1215 05:40:15.674859 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5360354e-e2a9-4bf5-bc74-e1b778b512f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5360354e-e2a9-4bf5-bc74-e1b778b512f5" (UID: "5360354e-e2a9-4bf5-bc74-e1b778b512f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:40:15 crc kubenswrapper[4747]: I1215 05:40:15.770467 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5360354e-e2a9-4bf5-bc74-e1b778b512f5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:16 crc kubenswrapper[4747]: I1215 05:40:16.162528 4747 generic.go:334] "Generic (PLEG): container finished" podID="5360354e-e2a9-4bf5-bc74-e1b778b512f5" containerID="3837e4714103e093822a038469e0d44673a8ed182a859da4688244a3fb067e1f" exitCode=0 Dec 15 05:40:16 crc kubenswrapper[4747]: I1215 05:40:16.162589 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldqmm" event={"ID":"5360354e-e2a9-4bf5-bc74-e1b778b512f5","Type":"ContainerDied","Data":"3837e4714103e093822a038469e0d44673a8ed182a859da4688244a3fb067e1f"} Dec 15 05:40:16 crc kubenswrapper[4747]: I1215 05:40:16.162638 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldqmm" event={"ID":"5360354e-e2a9-4bf5-bc74-e1b778b512f5","Type":"ContainerDied","Data":"b569fe85f9cb897be36614cb2059569b0bcafad07a4a07465542bcb702423ece"} Dec 15 05:40:16 crc kubenswrapper[4747]: I1215 05:40:16.162663 4747 scope.go:117] "RemoveContainer" containerID="3837e4714103e093822a038469e0d44673a8ed182a859da4688244a3fb067e1f" Dec 15 05:40:16 crc kubenswrapper[4747]: I1215 05:40:16.162846 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ldqmm" Dec 15 05:40:16 crc kubenswrapper[4747]: I1215 05:40:16.177596 4747 scope.go:117] "RemoveContainer" containerID="9a52824445169a3b9f679187633ed60ce16cd990d7f4ffa9462892fe305b38fe" Dec 15 05:40:16 crc kubenswrapper[4747]: I1215 05:40:16.190453 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ldqmm"] Dec 15 05:40:16 crc kubenswrapper[4747]: I1215 05:40:16.195106 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ldqmm"] Dec 15 05:40:16 crc kubenswrapper[4747]: I1215 05:40:16.212447 4747 scope.go:117] "RemoveContainer" containerID="4b212954a1ca6b472e1ff65ccbcdb0b5977e344ecc276616a172293ccf4914fd" Dec 15 05:40:16 crc kubenswrapper[4747]: I1215 05:40:16.226334 4747 scope.go:117] "RemoveContainer" containerID="3837e4714103e093822a038469e0d44673a8ed182a859da4688244a3fb067e1f" Dec 15 05:40:16 crc kubenswrapper[4747]: E1215 05:40:16.226690 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3837e4714103e093822a038469e0d44673a8ed182a859da4688244a3fb067e1f\": container with ID starting with 3837e4714103e093822a038469e0d44673a8ed182a859da4688244a3fb067e1f not found: ID does not exist" containerID="3837e4714103e093822a038469e0d44673a8ed182a859da4688244a3fb067e1f" Dec 15 05:40:16 crc kubenswrapper[4747]: I1215 05:40:16.226811 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3837e4714103e093822a038469e0d44673a8ed182a859da4688244a3fb067e1f"} err="failed to get container status \"3837e4714103e093822a038469e0d44673a8ed182a859da4688244a3fb067e1f\": rpc error: code = NotFound desc = could not find container \"3837e4714103e093822a038469e0d44673a8ed182a859da4688244a3fb067e1f\": container with ID starting with 3837e4714103e093822a038469e0d44673a8ed182a859da4688244a3fb067e1f not found: ID does 
not exist" Dec 15 05:40:16 crc kubenswrapper[4747]: I1215 05:40:16.226900 4747 scope.go:117] "RemoveContainer" containerID="9a52824445169a3b9f679187633ed60ce16cd990d7f4ffa9462892fe305b38fe" Dec 15 05:40:16 crc kubenswrapper[4747]: E1215 05:40:16.227506 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a52824445169a3b9f679187633ed60ce16cd990d7f4ffa9462892fe305b38fe\": container with ID starting with 9a52824445169a3b9f679187633ed60ce16cd990d7f4ffa9462892fe305b38fe not found: ID does not exist" containerID="9a52824445169a3b9f679187633ed60ce16cd990d7f4ffa9462892fe305b38fe" Dec 15 05:40:16 crc kubenswrapper[4747]: I1215 05:40:16.227548 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a52824445169a3b9f679187633ed60ce16cd990d7f4ffa9462892fe305b38fe"} err="failed to get container status \"9a52824445169a3b9f679187633ed60ce16cd990d7f4ffa9462892fe305b38fe\": rpc error: code = NotFound desc = could not find container \"9a52824445169a3b9f679187633ed60ce16cd990d7f4ffa9462892fe305b38fe\": container with ID starting with 9a52824445169a3b9f679187633ed60ce16cd990d7f4ffa9462892fe305b38fe not found: ID does not exist" Dec 15 05:40:16 crc kubenswrapper[4747]: I1215 05:40:16.227577 4747 scope.go:117] "RemoveContainer" containerID="4b212954a1ca6b472e1ff65ccbcdb0b5977e344ecc276616a172293ccf4914fd" Dec 15 05:40:16 crc kubenswrapper[4747]: E1215 05:40:16.227893 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b212954a1ca6b472e1ff65ccbcdb0b5977e344ecc276616a172293ccf4914fd\": container with ID starting with 4b212954a1ca6b472e1ff65ccbcdb0b5977e344ecc276616a172293ccf4914fd not found: ID does not exist" containerID="4b212954a1ca6b472e1ff65ccbcdb0b5977e344ecc276616a172293ccf4914fd" Dec 15 05:40:16 crc kubenswrapper[4747]: I1215 05:40:16.228001 4747 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b212954a1ca6b472e1ff65ccbcdb0b5977e344ecc276616a172293ccf4914fd"} err="failed to get container status \"4b212954a1ca6b472e1ff65ccbcdb0b5977e344ecc276616a172293ccf4914fd\": rpc error: code = NotFound desc = could not find container \"4b212954a1ca6b472e1ff65ccbcdb0b5977e344ecc276616a172293ccf4914fd\": container with ID starting with 4b212954a1ca6b472e1ff65ccbcdb0b5977e344ecc276616a172293ccf4914fd not found: ID does not exist" Dec 15 05:40:16 crc kubenswrapper[4747]: I1215 05:40:16.635880 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5360354e-e2a9-4bf5-bc74-e1b778b512f5" path="/var/lib/kubelet/pods/5360354e-e2a9-4bf5-bc74-e1b778b512f5/volumes" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.894866 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 15 05:40:19 crc kubenswrapper[4747]: E1215 05:40:19.895642 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f997591-b82e-4c3b-85b9-5106a8168eec" containerName="extract-utilities" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.895666 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f997591-b82e-4c3b-85b9-5106a8168eec" containerName="extract-utilities" Dec 15 05:40:19 crc kubenswrapper[4747]: E1215 05:40:19.895679 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f997591-b82e-4c3b-85b9-5106a8168eec" containerName="registry-server" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.895684 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f997591-b82e-4c3b-85b9-5106a8168eec" containerName="registry-server" Dec 15 05:40:19 crc kubenswrapper[4747]: E1215 05:40:19.895694 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5360354e-e2a9-4bf5-bc74-e1b778b512f5" containerName="registry-server" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.895700 4747 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5360354e-e2a9-4bf5-bc74-e1b778b512f5" containerName="registry-server" Dec 15 05:40:19 crc kubenswrapper[4747]: E1215 05:40:19.895712 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95e7ce44-2981-4ea8-91a9-a9e897bdc80b" containerName="registry-server" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.895718 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e7ce44-2981-4ea8-91a9-a9e897bdc80b" containerName="registry-server" Dec 15 05:40:19 crc kubenswrapper[4747]: E1215 05:40:19.895729 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8d5b87-e7b5-491f-aee8-98aa02ba9a14" containerName="registry-server" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.895734 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8d5b87-e7b5-491f-aee8-98aa02ba9a14" containerName="registry-server" Dec 15 05:40:19 crc kubenswrapper[4747]: E1215 05:40:19.895743 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5360354e-e2a9-4bf5-bc74-e1b778b512f5" containerName="extract-content" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.895749 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="5360354e-e2a9-4bf5-bc74-e1b778b512f5" containerName="extract-content" Dec 15 05:40:19 crc kubenswrapper[4747]: E1215 05:40:19.895756 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="166f8882-1da0-434d-9b5f-79a43223e9fb" containerName="pruner" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.895762 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="166f8882-1da0-434d-9b5f-79a43223e9fb" containerName="pruner" Dec 15 05:40:19 crc kubenswrapper[4747]: E1215 05:40:19.895768 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95e7ce44-2981-4ea8-91a9-a9e897bdc80b" containerName="extract-content" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.895774 4747 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="95e7ce44-2981-4ea8-91a9-a9e897bdc80b" containerName="extract-content" Dec 15 05:40:19 crc kubenswrapper[4747]: E1215 05:40:19.895782 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8d5b87-e7b5-491f-aee8-98aa02ba9a14" containerName="extract-content" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.895787 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8d5b87-e7b5-491f-aee8-98aa02ba9a14" containerName="extract-content" Dec 15 05:40:19 crc kubenswrapper[4747]: E1215 05:40:19.895793 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8d5b87-e7b5-491f-aee8-98aa02ba9a14" containerName="extract-utilities" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.895801 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8d5b87-e7b5-491f-aee8-98aa02ba9a14" containerName="extract-utilities" Dec 15 05:40:19 crc kubenswrapper[4747]: E1215 05:40:19.895812 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5360354e-e2a9-4bf5-bc74-e1b778b512f5" containerName="extract-utilities" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.895818 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="5360354e-e2a9-4bf5-bc74-e1b778b512f5" containerName="extract-utilities" Dec 15 05:40:19 crc kubenswrapper[4747]: E1215 05:40:19.895828 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f997591-b82e-4c3b-85b9-5106a8168eec" containerName="extract-content" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.895834 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f997591-b82e-4c3b-85b9-5106a8168eec" containerName="extract-content" Dec 15 05:40:19 crc kubenswrapper[4747]: E1215 05:40:19.895842 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97e304e6-0512-4367-9811-e41ccac42926" containerName="pruner" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.895848 4747 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="97e304e6-0512-4367-9811-e41ccac42926" containerName="pruner" Dec 15 05:40:19 crc kubenswrapper[4747]: E1215 05:40:19.895858 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95e7ce44-2981-4ea8-91a9-a9e897bdc80b" containerName="extract-utilities" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.895864 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e7ce44-2981-4ea8-91a9-a9e897bdc80b" containerName="extract-utilities" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.896033 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="97e304e6-0512-4367-9811-e41ccac42926" containerName="pruner" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.896047 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="166f8882-1da0-434d-9b5f-79a43223e9fb" containerName="pruner" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.896055 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="95e7ce44-2981-4ea8-91a9-a9e897bdc80b" containerName="registry-server" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.896072 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="5360354e-e2a9-4bf5-bc74-e1b778b512f5" containerName="registry-server" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.896081 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a8d5b87-e7b5-491f-aee8-98aa02ba9a14" containerName="registry-server" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.896088 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f997591-b82e-4c3b-85b9-5106a8168eec" containerName="registry-server" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.896653 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.899066 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.899540 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.903464 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.917748 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e320f755-38cd-4d68-8780-a935e60adf9c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e320f755-38cd-4d68-8780-a935e60adf9c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 15 05:40:19 crc kubenswrapper[4747]: I1215 05:40:19.917838 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e320f755-38cd-4d68-8780-a935e60adf9c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e320f755-38cd-4d68-8780-a935e60adf9c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 15 05:40:20 crc kubenswrapper[4747]: I1215 05:40:20.018515 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e320f755-38cd-4d68-8780-a935e60adf9c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e320f755-38cd-4d68-8780-a935e60adf9c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 15 05:40:20 crc kubenswrapper[4747]: I1215 05:40:20.018582 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e320f755-38cd-4d68-8780-a935e60adf9c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e320f755-38cd-4d68-8780-a935e60adf9c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 15 05:40:20 crc kubenswrapper[4747]: I1215 05:40:20.018650 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e320f755-38cd-4d68-8780-a935e60adf9c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e320f755-38cd-4d68-8780-a935e60adf9c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 15 05:40:20 crc kubenswrapper[4747]: I1215 05:40:20.036027 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e320f755-38cd-4d68-8780-a935e60adf9c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e320f755-38cd-4d68-8780-a935e60adf9c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 15 05:40:20 crc kubenswrapper[4747]: I1215 05:40:20.212632 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 15 05:40:20 crc kubenswrapper[4747]: I1215 05:40:20.367648 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 15 05:40:21 crc kubenswrapper[4747]: I1215 05:40:21.189504 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e320f755-38cd-4d68-8780-a935e60adf9c","Type":"ContainerStarted","Data":"144d12d67bd63a7c386290eba4858b6d460246b465c903e8816d3f249f5e2992"} Dec 15 05:40:21 crc kubenswrapper[4747]: I1215 05:40:21.189842 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e320f755-38cd-4d68-8780-a935e60adf9c","Type":"ContainerStarted","Data":"6d6dd491b51f433621b07ac3a9cecc88c0a1a8c8d4417d43bc45ba8a1ea363b8"} Dec 15 05:40:21 crc kubenswrapper[4747]: I1215 05:40:21.204840 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.204827121 podStartE2EDuration="2.204827121s" podCreationTimestamp="2025-12-15 05:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:40:21.202150419 +0000 UTC m=+184.898662346" watchObservedRunningTime="2025-12-15 05:40:21.204827121 +0000 UTC m=+184.901339039" Dec 15 05:40:22 crc kubenswrapper[4747]: I1215 05:40:22.195862 4747 generic.go:334] "Generic (PLEG): container finished" podID="e320f755-38cd-4d68-8780-a935e60adf9c" containerID="144d12d67bd63a7c386290eba4858b6d460246b465c903e8816d3f249f5e2992" exitCode=0 Dec 15 05:40:22 crc kubenswrapper[4747]: I1215 05:40:22.195982 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"e320f755-38cd-4d68-8780-a935e60adf9c","Type":"ContainerDied","Data":"144d12d67bd63a7c386290eba4858b6d460246b465c903e8816d3f249f5e2992"} Dec 15 05:40:22 crc kubenswrapper[4747]: I1215 05:40:22.761002 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 15 05:40:23 crc kubenswrapper[4747]: I1215 05:40:23.411789 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 15 05:40:23 crc kubenswrapper[4747]: I1215 05:40:23.458132 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e320f755-38cd-4d68-8780-a935e60adf9c-kubelet-dir\") pod \"e320f755-38cd-4d68-8780-a935e60adf9c\" (UID: \"e320f755-38cd-4d68-8780-a935e60adf9c\") " Dec 15 05:40:23 crc kubenswrapper[4747]: I1215 05:40:23.458170 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e320f755-38cd-4d68-8780-a935e60adf9c-kube-api-access\") pod \"e320f755-38cd-4d68-8780-a935e60adf9c\" (UID: \"e320f755-38cd-4d68-8780-a935e60adf9c\") " Dec 15 05:40:23 crc kubenswrapper[4747]: I1215 05:40:23.458355 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e320f755-38cd-4d68-8780-a935e60adf9c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e320f755-38cd-4d68-8780-a935e60adf9c" (UID: "e320f755-38cd-4d68-8780-a935e60adf9c"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:40:23 crc kubenswrapper[4747]: I1215 05:40:23.458528 4747 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e320f755-38cd-4d68-8780-a935e60adf9c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:23 crc kubenswrapper[4747]: I1215 05:40:23.464111 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e320f755-38cd-4d68-8780-a935e60adf9c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e320f755-38cd-4d68-8780-a935e60adf9c" (UID: "e320f755-38cd-4d68-8780-a935e60adf9c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:40:23 crc kubenswrapper[4747]: I1215 05:40:23.559768 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e320f755-38cd-4d68-8780-a935e60adf9c-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:24 crc kubenswrapper[4747]: I1215 05:40:24.207274 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e320f755-38cd-4d68-8780-a935e60adf9c","Type":"ContainerDied","Data":"6d6dd491b51f433621b07ac3a9cecc88c0a1a8c8d4417d43bc45ba8a1ea363b8"} Dec 15 05:40:24 crc kubenswrapper[4747]: I1215 05:40:24.207312 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 15 05:40:24 crc kubenswrapper[4747]: I1215 05:40:24.207322 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d6dd491b51f433621b07ac3a9cecc88c0a1a8c8d4417d43bc45ba8a1ea363b8" Dec 15 05:40:24 crc kubenswrapper[4747]: I1215 05:40:24.889360 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 15 05:40:24 crc kubenswrapper[4747]: E1215 05:40:24.889570 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e320f755-38cd-4d68-8780-a935e60adf9c" containerName="pruner" Dec 15 05:40:24 crc kubenswrapper[4747]: I1215 05:40:24.889583 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e320f755-38cd-4d68-8780-a935e60adf9c" containerName="pruner" Dec 15 05:40:24 crc kubenswrapper[4747]: I1215 05:40:24.889688 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="e320f755-38cd-4d68-8780-a935e60adf9c" containerName="pruner" Dec 15 05:40:24 crc kubenswrapper[4747]: I1215 05:40:24.890083 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 15 05:40:24 crc kubenswrapper[4747]: I1215 05:40:24.891901 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 15 05:40:24 crc kubenswrapper[4747]: I1215 05:40:24.892501 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 15 05:40:24 crc kubenswrapper[4747]: I1215 05:40:24.896021 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 15 05:40:24 crc kubenswrapper[4747]: I1215 05:40:24.981423 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ccdb417-62a2-4f3a-8b63-742cfee41cde-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2ccdb417-62a2-4f3a-8b63-742cfee41cde\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 15 05:40:24 crc kubenswrapper[4747]: I1215 05:40:24.981514 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ccdb417-62a2-4f3a-8b63-742cfee41cde-kube-api-access\") pod \"installer-9-crc\" (UID: \"2ccdb417-62a2-4f3a-8b63-742cfee41cde\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 15 05:40:24 crc kubenswrapper[4747]: I1215 05:40:24.981701 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ccdb417-62a2-4f3a-8b63-742cfee41cde-var-lock\") pod \"installer-9-crc\" (UID: \"2ccdb417-62a2-4f3a-8b63-742cfee41cde\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 15 05:40:25 crc kubenswrapper[4747]: I1215 05:40:25.083222 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/2ccdb417-62a2-4f3a-8b63-742cfee41cde-var-lock\") pod \"installer-9-crc\" (UID: \"2ccdb417-62a2-4f3a-8b63-742cfee41cde\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 15 05:40:25 crc kubenswrapper[4747]: I1215 05:40:25.083287 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ccdb417-62a2-4f3a-8b63-742cfee41cde-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2ccdb417-62a2-4f3a-8b63-742cfee41cde\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 15 05:40:25 crc kubenswrapper[4747]: I1215 05:40:25.083343 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ccdb417-62a2-4f3a-8b63-742cfee41cde-var-lock\") pod \"installer-9-crc\" (UID: \"2ccdb417-62a2-4f3a-8b63-742cfee41cde\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 15 05:40:25 crc kubenswrapper[4747]: I1215 05:40:25.083354 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ccdb417-62a2-4f3a-8b63-742cfee41cde-kube-api-access\") pod \"installer-9-crc\" (UID: \"2ccdb417-62a2-4f3a-8b63-742cfee41cde\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 15 05:40:25 crc kubenswrapper[4747]: I1215 05:40:25.083473 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ccdb417-62a2-4f3a-8b63-742cfee41cde-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2ccdb417-62a2-4f3a-8b63-742cfee41cde\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 15 05:40:25 crc kubenswrapper[4747]: I1215 05:40:25.099535 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ccdb417-62a2-4f3a-8b63-742cfee41cde-kube-api-access\") pod \"installer-9-crc\" (UID: \"2ccdb417-62a2-4f3a-8b63-742cfee41cde\") " 
pod="openshift-kube-apiserver/installer-9-crc" Dec 15 05:40:25 crc kubenswrapper[4747]: I1215 05:40:25.205039 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 15 05:40:25 crc kubenswrapper[4747]: I1215 05:40:25.574991 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 15 05:40:26 crc kubenswrapper[4747]: I1215 05:40:26.218646 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ccdb417-62a2-4f3a-8b63-742cfee41cde","Type":"ContainerStarted","Data":"e3fcf7d329060c7efe9fa909dd12bc96c5a2d1261d5886ac9f0a5716e4e15ee0"} Dec 15 05:40:26 crc kubenswrapper[4747]: I1215 05:40:26.219045 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ccdb417-62a2-4f3a-8b63-742cfee41cde","Type":"ContainerStarted","Data":"5be19ea9260fc3a20fdc82ec1ccb2941c1cb599ad5b4e4a3f3e1ea3baa898f3e"} Dec 15 05:40:26 crc kubenswrapper[4747]: I1215 05:40:26.235322 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.235303827 podStartE2EDuration="2.235303827s" podCreationTimestamp="2025-12-15 05:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:40:26.233648655 +0000 UTC m=+189.930160572" watchObservedRunningTime="2025-12-15 05:40:26.235303827 +0000 UTC m=+189.931815744" Dec 15 05:40:28 crc kubenswrapper[4747]: I1215 05:40:28.865452 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 05:40:28 crc kubenswrapper[4747]: 
I1215 05:40:28.865746 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 05:40:34 crc kubenswrapper[4747]: I1215 05:40:34.963307 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-v472l" podUID="caa99f47-6c6a-4642-b2eb-946507229c80" containerName="oauth-openshift" containerID="cri-o://427bce72ad2a9bd08f903a76cc07c1dcbd2b2908c5078234548d304030d7ebb2" gracePeriod=15 Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.259618 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.270178 4747 generic.go:334] "Generic (PLEG): container finished" podID="caa99f47-6c6a-4642-b2eb-946507229c80" containerID="427bce72ad2a9bd08f903a76cc07c1dcbd2b2908c5078234548d304030d7ebb2" exitCode=0 Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.270234 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-v472l" event={"ID":"caa99f47-6c6a-4642-b2eb-946507229c80","Type":"ContainerDied","Data":"427bce72ad2a9bd08f903a76cc07c1dcbd2b2908c5078234548d304030d7ebb2"} Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.270265 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-v472l" event={"ID":"caa99f47-6c6a-4642-b2eb-946507229c80","Type":"ContainerDied","Data":"9d3f95fe8a8f6cdf026d7b270b802ff1506cd7694c102b2044aed9587a7e09ba"} Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.270284 4747 scope.go:117] "RemoveContainer" 
containerID="427bce72ad2a9bd08f903a76cc07c1dcbd2b2908c5078234548d304030d7ebb2" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.270420 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-v472l" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.291018 4747 scope.go:117] "RemoveContainer" containerID="427bce72ad2a9bd08f903a76cc07c1dcbd2b2908c5078234548d304030d7ebb2" Dec 15 05:40:35 crc kubenswrapper[4747]: E1215 05:40:35.291366 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"427bce72ad2a9bd08f903a76cc07c1dcbd2b2908c5078234548d304030d7ebb2\": container with ID starting with 427bce72ad2a9bd08f903a76cc07c1dcbd2b2908c5078234548d304030d7ebb2 not found: ID does not exist" containerID="427bce72ad2a9bd08f903a76cc07c1dcbd2b2908c5078234548d304030d7ebb2" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.291403 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"427bce72ad2a9bd08f903a76cc07c1dcbd2b2908c5078234548d304030d7ebb2"} err="failed to get container status \"427bce72ad2a9bd08f903a76cc07c1dcbd2b2908c5078234548d304030d7ebb2\": rpc error: code = NotFound desc = could not find container \"427bce72ad2a9bd08f903a76cc07c1dcbd2b2908c5078234548d304030d7ebb2\": container with ID starting with 427bce72ad2a9bd08f903a76cc07c1dcbd2b2908c5078234548d304030d7ebb2 not found: ID does not exist" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.394187 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-idp-0-file-data\") pod \"caa99f47-6c6a-4642-b2eb-946507229c80\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.394250 4747 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/caa99f47-6c6a-4642-b2eb-946507229c80-audit-dir\") pod \"caa99f47-6c6a-4642-b2eb-946507229c80\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.394277 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-template-provider-selection\") pod \"caa99f47-6c6a-4642-b2eb-946507229c80\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.394330 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-router-certs\") pod \"caa99f47-6c6a-4642-b2eb-946507229c80\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.394359 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-ocp-branding-template\") pod \"caa99f47-6c6a-4642-b2eb-946507229c80\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.394380 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgj52\" (UniqueName: \"kubernetes.io/projected/caa99f47-6c6a-4642-b2eb-946507229c80-kube-api-access-qgj52\") pod \"caa99f47-6c6a-4642-b2eb-946507229c80\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.394407 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-template-error\") pod \"caa99f47-6c6a-4642-b2eb-946507229c80\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.394408 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/caa99f47-6c6a-4642-b2eb-946507229c80-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "caa99f47-6c6a-4642-b2eb-946507229c80" (UID: "caa99f47-6c6a-4642-b2eb-946507229c80"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.394441 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-template-login\") pod \"caa99f47-6c6a-4642-b2eb-946507229c80\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.394517 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-audit-policies\") pod \"caa99f47-6c6a-4642-b2eb-946507229c80\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.394542 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-session\") pod \"caa99f47-6c6a-4642-b2eb-946507229c80\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.394573 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-serving-cert\") pod \"caa99f47-6c6a-4642-b2eb-946507229c80\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.394591 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-service-ca\") pod \"caa99f47-6c6a-4642-b2eb-946507229c80\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.394607 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-cliconfig\") pod \"caa99f47-6c6a-4642-b2eb-946507229c80\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.394638 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-trusted-ca-bundle\") pod \"caa99f47-6c6a-4642-b2eb-946507229c80\" (UID: \"caa99f47-6c6a-4642-b2eb-946507229c80\") " Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.394995 4747 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/caa99f47-6c6a-4642-b2eb-946507229c80-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.396023 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "caa99f47-6c6a-4642-b2eb-946507229c80" (UID: "caa99f47-6c6a-4642-b2eb-946507229c80"). 
InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.396271 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "caa99f47-6c6a-4642-b2eb-946507229c80" (UID: "caa99f47-6c6a-4642-b2eb-946507229c80"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.396292 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "caa99f47-6c6a-4642-b2eb-946507229c80" (UID: "caa99f47-6c6a-4642-b2eb-946507229c80"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.396450 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "caa99f47-6c6a-4642-b2eb-946507229c80" (UID: "caa99f47-6c6a-4642-b2eb-946507229c80"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.401420 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "caa99f47-6c6a-4642-b2eb-946507229c80" (UID: "caa99f47-6c6a-4642-b2eb-946507229c80"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.401613 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "caa99f47-6c6a-4642-b2eb-946507229c80" (UID: "caa99f47-6c6a-4642-b2eb-946507229c80"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.401630 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caa99f47-6c6a-4642-b2eb-946507229c80-kube-api-access-qgj52" (OuterVolumeSpecName: "kube-api-access-qgj52") pod "caa99f47-6c6a-4642-b2eb-946507229c80" (UID: "caa99f47-6c6a-4642-b2eb-946507229c80"). InnerVolumeSpecName "kube-api-access-qgj52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.401916 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "caa99f47-6c6a-4642-b2eb-946507229c80" (UID: "caa99f47-6c6a-4642-b2eb-946507229c80"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.402363 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "caa99f47-6c6a-4642-b2eb-946507229c80" (UID: "caa99f47-6c6a-4642-b2eb-946507229c80"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.402708 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "caa99f47-6c6a-4642-b2eb-946507229c80" (UID: "caa99f47-6c6a-4642-b2eb-946507229c80"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.402854 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "caa99f47-6c6a-4642-b2eb-946507229c80" (UID: "caa99f47-6c6a-4642-b2eb-946507229c80"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.403094 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "caa99f47-6c6a-4642-b2eb-946507229c80" (UID: "caa99f47-6c6a-4642-b2eb-946507229c80"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.403454 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "caa99f47-6c6a-4642-b2eb-946507229c80" (UID: "caa99f47-6c6a-4642-b2eb-946507229c80"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.496131 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.496165 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.496176 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.496188 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.496200 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.496210 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.496225 4747 reconciler_common.go:293] "Volume detached for 
volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.496235 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.496246 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgj52\" (UniqueName: \"kubernetes.io/projected/caa99f47-6c6a-4642-b2eb-946507229c80-kube-api-access-qgj52\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.496258 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.496268 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.496281 4747 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/caa99f47-6c6a-4642-b2eb-946507229c80-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.496290 4747 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/caa99f47-6c6a-4642-b2eb-946507229c80-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 
05:40:35.592543 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v472l"] Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.594993 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v472l"] Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.934788 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-79945c7d7f-gmmlv"] Dec 15 05:40:35 crc kubenswrapper[4747]: E1215 05:40:35.935395 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caa99f47-6c6a-4642-b2eb-946507229c80" containerName="oauth-openshift" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.935410 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="caa99f47-6c6a-4642-b2eb-946507229c80" containerName="oauth-openshift" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.935499 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="caa99f47-6c6a-4642-b2eb-946507229c80" containerName="oauth-openshift" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.935896 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.938887 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.939364 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.940402 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.940748 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.940882 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.940984 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.941041 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.941348 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.941663 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.942044 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 15 
05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.942295 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.942389 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.947641 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.950544 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.951383 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79945c7d7f-gmmlv"] Dec 15 05:40:35 crc kubenswrapper[4747]: I1215 05:40:35.961292 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.106901 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.107770 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/695b376b-917b-4684-a9d1-4b268999e027-audit-dir\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " 
pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.107919 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/695b376b-917b-4684-a9d1-4b268999e027-audit-policies\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.108112 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-system-service-ca\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.108232 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd2qz\" (UniqueName: \"kubernetes.io/projected/695b376b-917b-4684-a9d1-4b268999e027-kube-api-access-pd2qz\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.108328 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-user-template-login\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.108431 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.108543 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-user-template-error\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.108640 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.108775 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-system-router-certs\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.108869 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-system-session\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.108955 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.109048 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.109133 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.210804 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/695b376b-917b-4684-a9d1-4b268999e027-audit-dir\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " 
pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.210969 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/695b376b-917b-4684-a9d1-4b268999e027-audit-dir\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.211108 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/695b376b-917b-4684-a9d1-4b268999e027-audit-policies\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.211263 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-system-service-ca\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.211310 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd2qz\" (UniqueName: \"kubernetes.io/projected/695b376b-917b-4684-a9d1-4b268999e027-kube-api-access-pd2qz\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.211354 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-user-template-login\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.211392 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.211449 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-user-template-error\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.211476 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.211515 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-system-router-certs\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " 
pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.211550 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-system-session\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.211582 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.211600 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.211617 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.211684 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.212109 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-system-service-ca\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.212592 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/695b376b-917b-4684-a9d1-4b268999e027-audit-policies\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.212808 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.213402 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " 
pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.215084 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-system-router-certs\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.215094 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.215655 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-user-template-login\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.215841 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.215863 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.216267 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-user-template-error\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.216803 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-system-session\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.217010 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/695b376b-917b-4684-a9d1-4b268999e027-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.226384 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd2qz\" (UniqueName: \"kubernetes.io/projected/695b376b-917b-4684-a9d1-4b268999e027-kube-api-access-pd2qz\") pod \"oauth-openshift-79945c7d7f-gmmlv\" (UID: \"695b376b-917b-4684-a9d1-4b268999e027\") " 
pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.249433 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.606177 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79945c7d7f-gmmlv"] Dec 15 05:40:36 crc kubenswrapper[4747]: I1215 05:40:36.634414 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caa99f47-6c6a-4642-b2eb-946507229c80" path="/var/lib/kubelet/pods/caa99f47-6c6a-4642-b2eb-946507229c80/volumes" Dec 15 05:40:37 crc kubenswrapper[4747]: I1215 05:40:37.281900 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" event={"ID":"695b376b-917b-4684-a9d1-4b268999e027","Type":"ContainerStarted","Data":"fb5ce5b9535a2d23be7faa7586671fb4c277fca7039392b314d687068a7a4f1d"} Dec 15 05:40:37 crc kubenswrapper[4747]: I1215 05:40:37.283247 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" event={"ID":"695b376b-917b-4684-a9d1-4b268999e027","Type":"ContainerStarted","Data":"15913b6238557dad3a1c091db739f75793956f6e97cbb2a454de27bc745a60c4"} Dec 15 05:40:37 crc kubenswrapper[4747]: I1215 05:40:37.283328 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:37 crc kubenswrapper[4747]: I1215 05:40:37.299053 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" podStartSLOduration=28.299029672 podStartE2EDuration="28.299029672s" podCreationTimestamp="2025-12-15 05:40:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-15 05:40:37.297386985 +0000 UTC m=+200.993898902" watchObservedRunningTime="2025-12-15 05:40:37.299029672 +0000 UTC m=+200.995541589" Dec 15 05:40:37 crc kubenswrapper[4747]: I1215 05:40:37.321638 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79945c7d7f-gmmlv" Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.613012 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f66lm"] Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.613968 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f66lm" podUID="9a524d92-a1c1-4494-b487-ba0df0e6a1ec" containerName="registry-server" containerID="cri-o://e45fe7d5f93326148303da0b04cf5d9ee9bdcdd1dd56d6b397cf9d7ac983ede3" gracePeriod=30 Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.622696 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6zrdj"] Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.622961 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6zrdj" podUID="68eca474-5187-41ca-b67f-cb316a4ab410" containerName="registry-server" containerID="cri-o://e5a5c4d573912ab05e31be526d1bcc2fab630bc4d43f092db81c43e6d63448ab" gracePeriod=30 Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.627205 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mvljn"] Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.627373 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" podUID="3aed04d0-4166-4ed3-bf2b-39e9598d0160" containerName="marketplace-operator" 
containerID="cri-o://bc0558e63876a6f899739254fec5f7486396a430a3468c432efc2e1e673235bd" gracePeriod=30 Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.632554 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxjhc"] Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.632828 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gxjhc" podUID="496c9f6b-020f-4ba2-9031-4dfee47f18ab" containerName="registry-server" containerID="cri-o://b993594355de75c6a85a73c13dff97f4d7f7ad7eb0dd3b68fcc4e383a6b457ed" gracePeriod=30 Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.641710 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qr5lt"] Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.642528 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qr5lt" Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.644232 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hhfhz"] Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.644467 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hhfhz" podUID="97c38c52-062a-4f94-9992-f944bb0519ee" containerName="registry-server" containerID="cri-o://a55b037d1361bf6bdeb116e53646d89dd20ba8c32333ecf135a3cbfff6a69724" gracePeriod=30 Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.660425 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qr5lt"] Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.823290 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc8ff\" (UniqueName: 
\"kubernetes.io/projected/f22206aa-87c5-4c96-b146-53b0890697fa-kube-api-access-fc8ff\") pod \"marketplace-operator-79b997595-qr5lt\" (UID: \"f22206aa-87c5-4c96-b146-53b0890697fa\") " pod="openshift-marketplace/marketplace-operator-79b997595-qr5lt" Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.823757 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f22206aa-87c5-4c96-b146-53b0890697fa-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qr5lt\" (UID: \"f22206aa-87c5-4c96-b146-53b0890697fa\") " pod="openshift-marketplace/marketplace-operator-79b997595-qr5lt" Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.823810 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f22206aa-87c5-4c96-b146-53b0890697fa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qr5lt\" (UID: \"f22206aa-87c5-4c96-b146-53b0890697fa\") " pod="openshift-marketplace/marketplace-operator-79b997595-qr5lt" Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.925168 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc8ff\" (UniqueName: \"kubernetes.io/projected/f22206aa-87c5-4c96-b146-53b0890697fa-kube-api-access-fc8ff\") pod \"marketplace-operator-79b997595-qr5lt\" (UID: \"f22206aa-87c5-4c96-b146-53b0890697fa\") " pod="openshift-marketplace/marketplace-operator-79b997595-qr5lt" Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.925268 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f22206aa-87c5-4c96-b146-53b0890697fa-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qr5lt\" (UID: \"f22206aa-87c5-4c96-b146-53b0890697fa\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-qr5lt" Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.925298 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f22206aa-87c5-4c96-b146-53b0890697fa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qr5lt\" (UID: \"f22206aa-87c5-4c96-b146-53b0890697fa\") " pod="openshift-marketplace/marketplace-operator-79b997595-qr5lt" Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.926606 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f22206aa-87c5-4c96-b146-53b0890697fa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qr5lt\" (UID: \"f22206aa-87c5-4c96-b146-53b0890697fa\") " pod="openshift-marketplace/marketplace-operator-79b997595-qr5lt" Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.935704 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f22206aa-87c5-4c96-b146-53b0890697fa-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qr5lt\" (UID: \"f22206aa-87c5-4c96-b146-53b0890697fa\") " pod="openshift-marketplace/marketplace-operator-79b997595-qr5lt" Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.942378 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc8ff\" (UniqueName: \"kubernetes.io/projected/f22206aa-87c5-4c96-b146-53b0890697fa-kube-api-access-fc8ff\") pod \"marketplace-operator-79b997595-qr5lt\" (UID: \"f22206aa-87c5-4c96-b146-53b0890697fa\") " pod="openshift-marketplace/marketplace-operator-79b997595-qr5lt" Dec 15 05:40:45 crc kubenswrapper[4747]: I1215 05:40:45.958139 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qr5lt" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.051093 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f66lm" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.052381 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hhfhz" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.054746 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.085461 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxjhc" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.102361 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6zrdj" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.233173 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97c38c52-062a-4f94-9992-f944bb0519ee-utilities\") pod \"97c38c52-062a-4f94-9992-f944bb0519ee\" (UID: \"97c38c52-062a-4f94-9992-f944bb0519ee\") " Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.233234 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68eca474-5187-41ca-b67f-cb316a4ab410-utilities\") pod \"68eca474-5187-41ca-b67f-cb316a4ab410\" (UID: \"68eca474-5187-41ca-b67f-cb316a4ab410\") " Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.233270 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/496c9f6b-020f-4ba2-9031-4dfee47f18ab-catalog-content\") pod \"496c9f6b-020f-4ba2-9031-4dfee47f18ab\" (UID: \"496c9f6b-020f-4ba2-9031-4dfee47f18ab\") " Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.233298 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97c38c52-062a-4f94-9992-f944bb0519ee-catalog-content\") pod \"97c38c52-062a-4f94-9992-f944bb0519ee\" (UID: \"97c38c52-062a-4f94-9992-f944bb0519ee\") " Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.233336 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a524d92-a1c1-4494-b487-ba0df0e6a1ec-utilities\") pod \"9a524d92-a1c1-4494-b487-ba0df0e6a1ec\" (UID: \"9a524d92-a1c1-4494-b487-ba0df0e6a1ec\") " Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.233367 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69fcp\" 
(UniqueName: \"kubernetes.io/projected/3aed04d0-4166-4ed3-bf2b-39e9598d0160-kube-api-access-69fcp\") pod \"3aed04d0-4166-4ed3-bf2b-39e9598d0160\" (UID: \"3aed04d0-4166-4ed3-bf2b-39e9598d0160\") " Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.233501 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3aed04d0-4166-4ed3-bf2b-39e9598d0160-marketplace-trusted-ca\") pod \"3aed04d0-4166-4ed3-bf2b-39e9598d0160\" (UID: \"3aed04d0-4166-4ed3-bf2b-39e9598d0160\") " Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.233526 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww2s9\" (UniqueName: \"kubernetes.io/projected/68eca474-5187-41ca-b67f-cb316a4ab410-kube-api-access-ww2s9\") pod \"68eca474-5187-41ca-b67f-cb316a4ab410\" (UID: \"68eca474-5187-41ca-b67f-cb316a4ab410\") " Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.233704 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3aed04d0-4166-4ed3-bf2b-39e9598d0160-marketplace-operator-metrics\") pod \"3aed04d0-4166-4ed3-bf2b-39e9598d0160\" (UID: \"3aed04d0-4166-4ed3-bf2b-39e9598d0160\") " Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.233741 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f228\" (UniqueName: \"kubernetes.io/projected/97c38c52-062a-4f94-9992-f944bb0519ee-kube-api-access-8f228\") pod \"97c38c52-062a-4f94-9992-f944bb0519ee\" (UID: \"97c38c52-062a-4f94-9992-f944bb0519ee\") " Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.233796 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a524d92-a1c1-4494-b487-ba0df0e6a1ec-catalog-content\") pod \"9a524d92-a1c1-4494-b487-ba0df0e6a1ec\" (UID: 
\"9a524d92-a1c1-4494-b487-ba0df0e6a1ec\") " Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.233856 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf744\" (UniqueName: \"kubernetes.io/projected/496c9f6b-020f-4ba2-9031-4dfee47f18ab-kube-api-access-bf744\") pod \"496c9f6b-020f-4ba2-9031-4dfee47f18ab\" (UID: \"496c9f6b-020f-4ba2-9031-4dfee47f18ab\") " Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.233892 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbrtg\" (UniqueName: \"kubernetes.io/projected/9a524d92-a1c1-4494-b487-ba0df0e6a1ec-kube-api-access-lbrtg\") pod \"9a524d92-a1c1-4494-b487-ba0df0e6a1ec\" (UID: \"9a524d92-a1c1-4494-b487-ba0df0e6a1ec\") " Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.233916 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68eca474-5187-41ca-b67f-cb316a4ab410-catalog-content\") pod \"68eca474-5187-41ca-b67f-cb316a4ab410\" (UID: \"68eca474-5187-41ca-b67f-cb316a4ab410\") " Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.233958 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/496c9f6b-020f-4ba2-9031-4dfee47f18ab-utilities\") pod \"496c9f6b-020f-4ba2-9031-4dfee47f18ab\" (UID: \"496c9f6b-020f-4ba2-9031-4dfee47f18ab\") " Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.240788 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/496c9f6b-020f-4ba2-9031-4dfee47f18ab-utilities" (OuterVolumeSpecName: "utilities") pod "496c9f6b-020f-4ba2-9031-4dfee47f18ab" (UID: "496c9f6b-020f-4ba2-9031-4dfee47f18ab"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.243084 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97c38c52-062a-4f94-9992-f944bb0519ee-utilities" (OuterVolumeSpecName: "utilities") pod "97c38c52-062a-4f94-9992-f944bb0519ee" (UID: "97c38c52-062a-4f94-9992-f944bb0519ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.243383 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68eca474-5187-41ca-b67f-cb316a4ab410-utilities" (OuterVolumeSpecName: "utilities") pod "68eca474-5187-41ca-b67f-cb316a4ab410" (UID: "68eca474-5187-41ca-b67f-cb316a4ab410"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.243613 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a524d92-a1c1-4494-b487-ba0df0e6a1ec-utilities" (OuterVolumeSpecName: "utilities") pod "9a524d92-a1c1-4494-b487-ba0df0e6a1ec" (UID: "9a524d92-a1c1-4494-b487-ba0df0e6a1ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.243716 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aed04d0-4166-4ed3-bf2b-39e9598d0160-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "3aed04d0-4166-4ed3-bf2b-39e9598d0160" (UID: "3aed04d0-4166-4ed3-bf2b-39e9598d0160"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.246673 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97c38c52-062a-4f94-9992-f944bb0519ee-kube-api-access-8f228" (OuterVolumeSpecName: "kube-api-access-8f228") pod "97c38c52-062a-4f94-9992-f944bb0519ee" (UID: "97c38c52-062a-4f94-9992-f944bb0519ee"). InnerVolumeSpecName "kube-api-access-8f228". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.247276 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68eca474-5187-41ca-b67f-cb316a4ab410-kube-api-access-ww2s9" (OuterVolumeSpecName: "kube-api-access-ww2s9") pod "68eca474-5187-41ca-b67f-cb316a4ab410" (UID: "68eca474-5187-41ca-b67f-cb316a4ab410"). InnerVolumeSpecName "kube-api-access-ww2s9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.247535 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aed04d0-4166-4ed3-bf2b-39e9598d0160-kube-api-access-69fcp" (OuterVolumeSpecName: "kube-api-access-69fcp") pod "3aed04d0-4166-4ed3-bf2b-39e9598d0160" (UID: "3aed04d0-4166-4ed3-bf2b-39e9598d0160"). InnerVolumeSpecName "kube-api-access-69fcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.248379 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aed04d0-4166-4ed3-bf2b-39e9598d0160-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "3aed04d0-4166-4ed3-bf2b-39e9598d0160" (UID: "3aed04d0-4166-4ed3-bf2b-39e9598d0160"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.248562 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496c9f6b-020f-4ba2-9031-4dfee47f18ab-kube-api-access-bf744" (OuterVolumeSpecName: "kube-api-access-bf744") pod "496c9f6b-020f-4ba2-9031-4dfee47f18ab" (UID: "496c9f6b-020f-4ba2-9031-4dfee47f18ab"). InnerVolumeSpecName "kube-api-access-bf744". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.250642 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a524d92-a1c1-4494-b487-ba0df0e6a1ec-kube-api-access-lbrtg" (OuterVolumeSpecName: "kube-api-access-lbrtg") pod "9a524d92-a1c1-4494-b487-ba0df0e6a1ec" (UID: "9a524d92-a1c1-4494-b487-ba0df0e6a1ec"). InnerVolumeSpecName "kube-api-access-lbrtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.266728 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/496c9f6b-020f-4ba2-9031-4dfee47f18ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "496c9f6b-020f-4ba2-9031-4dfee47f18ab" (UID: "496c9f6b-020f-4ba2-9031-4dfee47f18ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.297700 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68eca474-5187-41ca-b67f-cb316a4ab410-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68eca474-5187-41ca-b67f-cb316a4ab410" (UID: "68eca474-5187-41ca-b67f-cb316a4ab410"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.305040 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a524d92-a1c1-4494-b487-ba0df0e6a1ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a524d92-a1c1-4494-b487-ba0df0e6a1ec" (UID: "9a524d92-a1c1-4494-b487-ba0df0e6a1ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.330254 4747 generic.go:334] "Generic (PLEG): container finished" podID="496c9f6b-020f-4ba2-9031-4dfee47f18ab" containerID="b993594355de75c6a85a73c13dff97f4d7f7ad7eb0dd3b68fcc4e383a6b457ed" exitCode=0 Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.330349 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxjhc" event={"ID":"496c9f6b-020f-4ba2-9031-4dfee47f18ab","Type":"ContainerDied","Data":"b993594355de75c6a85a73c13dff97f4d7f7ad7eb0dd3b68fcc4e383a6b457ed"} Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.330430 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gxjhc" event={"ID":"496c9f6b-020f-4ba2-9031-4dfee47f18ab","Type":"ContainerDied","Data":"444f70b304b0c10e116c30637516b2577650b85901bcfcfbe4cde53b6dd2b91f"} Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.330366 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gxjhc" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.330500 4747 scope.go:117] "RemoveContainer" containerID="b993594355de75c6a85a73c13dff97f4d7f7ad7eb0dd3b68fcc4e383a6b457ed" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.332424 4747 generic.go:334] "Generic (PLEG): container finished" podID="3aed04d0-4166-4ed3-bf2b-39e9598d0160" containerID="bc0558e63876a6f899739254fec5f7486396a430a3468c432efc2e1e673235bd" exitCode=0 Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.332509 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" event={"ID":"3aed04d0-4166-4ed3-bf2b-39e9598d0160","Type":"ContainerDied","Data":"bc0558e63876a6f899739254fec5f7486396a430a3468c432efc2e1e673235bd"} Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.332543 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.332548 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mvljn" event={"ID":"3aed04d0-4166-4ed3-bf2b-39e9598d0160","Type":"ContainerDied","Data":"64076be02ac22b84a2d4558478908f21456f92a7df2110dd55d1ec6e9f0d1afb"} Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.335340 4747 generic.go:334] "Generic (PLEG): container finished" podID="97c38c52-062a-4f94-9992-f944bb0519ee" containerID="a55b037d1361bf6bdeb116e53646d89dd20ba8c32333ecf135a3cbfff6a69724" exitCode=0 Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.335400 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhfhz" event={"ID":"97c38c52-062a-4f94-9992-f944bb0519ee","Type":"ContainerDied","Data":"a55b037d1361bf6bdeb116e53646d89dd20ba8c32333ecf135a3cbfff6a69724"} Dec 15 05:40:46 crc 
kubenswrapper[4747]: I1215 05:40:46.335417 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhfhz" event={"ID":"97c38c52-062a-4f94-9992-f944bb0519ee","Type":"ContainerDied","Data":"af137d78b247057d75652eaaef9d3682d2f68a85a7fa97ea2a6b813885112a5e"} Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.335528 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hhfhz" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.339028 4747 generic.go:334] "Generic (PLEG): container finished" podID="9a524d92-a1c1-4494-b487-ba0df0e6a1ec" containerID="e45fe7d5f93326148303da0b04cf5d9ee9bdcdd1dd56d6b397cf9d7ac983ede3" exitCode=0 Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.339081 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f66lm" event={"ID":"9a524d92-a1c1-4494-b487-ba0df0e6a1ec","Type":"ContainerDied","Data":"e45fe7d5f93326148303da0b04cf5d9ee9bdcdd1dd56d6b397cf9d7ac983ede3"} Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.339124 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f66lm" event={"ID":"9a524d92-a1c1-4494-b487-ba0df0e6a1ec","Type":"ContainerDied","Data":"06d9d8b64d56d2946e2721376c9d7e5879e7467d9e357720a405854988a02675"} Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.339217 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f66lm" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.341629 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a524d92-a1c1-4494-b487-ba0df0e6a1ec-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.341669 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf744\" (UniqueName: \"kubernetes.io/projected/496c9f6b-020f-4ba2-9031-4dfee47f18ab-kube-api-access-bf744\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.341683 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68eca474-5187-41ca-b67f-cb316a4ab410-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.341699 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbrtg\" (UniqueName: \"kubernetes.io/projected/9a524d92-a1c1-4494-b487-ba0df0e6a1ec-kube-api-access-lbrtg\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.341711 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/496c9f6b-020f-4ba2-9031-4dfee47f18ab-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.341723 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97c38c52-062a-4f94-9992-f944bb0519ee-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.341733 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68eca474-5187-41ca-b67f-cb316a4ab410-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:46 crc kubenswrapper[4747]: 
I1215 05:40:46.341742 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/496c9f6b-020f-4ba2-9031-4dfee47f18ab-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.341752 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a524d92-a1c1-4494-b487-ba0df0e6a1ec-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.341761 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69fcp\" (UniqueName: \"kubernetes.io/projected/3aed04d0-4166-4ed3-bf2b-39e9598d0160-kube-api-access-69fcp\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.341770 4747 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3aed04d0-4166-4ed3-bf2b-39e9598d0160-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.341779 4747 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3aed04d0-4166-4ed3-bf2b-39e9598d0160-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.341789 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww2s9\" (UniqueName: \"kubernetes.io/projected/68eca474-5187-41ca-b67f-cb316a4ab410-kube-api-access-ww2s9\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.341798 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f228\" (UniqueName: \"kubernetes.io/projected/97c38c52-062a-4f94-9992-f944bb0519ee-kube-api-access-8f228\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.345383 4747 generic.go:334] 
"Generic (PLEG): container finished" podID="68eca474-5187-41ca-b67f-cb316a4ab410" containerID="e5a5c4d573912ab05e31be526d1bcc2fab630bc4d43f092db81c43e6d63448ab" exitCode=0 Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.345439 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zrdj" event={"ID":"68eca474-5187-41ca-b67f-cb316a4ab410","Type":"ContainerDied","Data":"e5a5c4d573912ab05e31be526d1bcc2fab630bc4d43f092db81c43e6d63448ab"} Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.345475 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zrdj" event={"ID":"68eca474-5187-41ca-b67f-cb316a4ab410","Type":"ContainerDied","Data":"bbe82d7b71b93438d6c3468e63fe74caf4d71e62a08283b21fc7daca07ef72a6"} Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.345586 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6zrdj" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.351726 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97c38c52-062a-4f94-9992-f944bb0519ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97c38c52-062a-4f94-9992-f944bb0519ee" (UID: "97c38c52-062a-4f94-9992-f944bb0519ee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.353338 4747 scope.go:117] "RemoveContainer" containerID="d4b046f1ff16694b9ea04214dd8bfc9a280c1a011e3bcbc0de9827e687ab84ca" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.375099 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mvljn"] Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.380920 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mvljn"] Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.383359 4747 scope.go:117] "RemoveContainer" containerID="713e0c3eaf339ff962703d6c50670684e698409b50abf5a06b118fa89ae1881a" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.385866 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qr5lt"] Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.394074 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxjhc"] Dec 15 05:40:46 crc kubenswrapper[4747]: W1215 05:40:46.394915 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf22206aa_87c5_4c96_b146_53b0890697fa.slice/crio-0814339bc01eb65c5678a371175f6dc1c4d7c088d99b3dbc7a051cdecf2ddeee WatchSource:0}: Error finding container 0814339bc01eb65c5678a371175f6dc1c4d7c088d99b3dbc7a051cdecf2ddeee: Status 404 returned error can't find the container with id 0814339bc01eb65c5678a371175f6dc1c4d7c088d99b3dbc7a051cdecf2ddeee Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.398149 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gxjhc"] Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.401386 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-6zrdj"] Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.407006 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6zrdj"] Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.412424 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f66lm"] Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.412608 4747 scope.go:117] "RemoveContainer" containerID="b993594355de75c6a85a73c13dff97f4d7f7ad7eb0dd3b68fcc4e383a6b457ed" Dec 15 05:40:46 crc kubenswrapper[4747]: E1215 05:40:46.413065 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b993594355de75c6a85a73c13dff97f4d7f7ad7eb0dd3b68fcc4e383a6b457ed\": container with ID starting with b993594355de75c6a85a73c13dff97f4d7f7ad7eb0dd3b68fcc4e383a6b457ed not found: ID does not exist" containerID="b993594355de75c6a85a73c13dff97f4d7f7ad7eb0dd3b68fcc4e383a6b457ed" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.413121 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b993594355de75c6a85a73c13dff97f4d7f7ad7eb0dd3b68fcc4e383a6b457ed"} err="failed to get container status \"b993594355de75c6a85a73c13dff97f4d7f7ad7eb0dd3b68fcc4e383a6b457ed\": rpc error: code = NotFound desc = could not find container \"b993594355de75c6a85a73c13dff97f4d7f7ad7eb0dd3b68fcc4e383a6b457ed\": container with ID starting with b993594355de75c6a85a73c13dff97f4d7f7ad7eb0dd3b68fcc4e383a6b457ed not found: ID does not exist" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.413168 4747 scope.go:117] "RemoveContainer" containerID="d4b046f1ff16694b9ea04214dd8bfc9a280c1a011e3bcbc0de9827e687ab84ca" Dec 15 05:40:46 crc kubenswrapper[4747]: E1215 05:40:46.413789 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"d4b046f1ff16694b9ea04214dd8bfc9a280c1a011e3bcbc0de9827e687ab84ca\": container with ID starting with d4b046f1ff16694b9ea04214dd8bfc9a280c1a011e3bcbc0de9827e687ab84ca not found: ID does not exist" containerID="d4b046f1ff16694b9ea04214dd8bfc9a280c1a011e3bcbc0de9827e687ab84ca" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.413832 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4b046f1ff16694b9ea04214dd8bfc9a280c1a011e3bcbc0de9827e687ab84ca"} err="failed to get container status \"d4b046f1ff16694b9ea04214dd8bfc9a280c1a011e3bcbc0de9827e687ab84ca\": rpc error: code = NotFound desc = could not find container \"d4b046f1ff16694b9ea04214dd8bfc9a280c1a011e3bcbc0de9827e687ab84ca\": container with ID starting with d4b046f1ff16694b9ea04214dd8bfc9a280c1a011e3bcbc0de9827e687ab84ca not found: ID does not exist" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.413869 4747 scope.go:117] "RemoveContainer" containerID="713e0c3eaf339ff962703d6c50670684e698409b50abf5a06b118fa89ae1881a" Dec 15 05:40:46 crc kubenswrapper[4747]: E1215 05:40:46.415593 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"713e0c3eaf339ff962703d6c50670684e698409b50abf5a06b118fa89ae1881a\": container with ID starting with 713e0c3eaf339ff962703d6c50670684e698409b50abf5a06b118fa89ae1881a not found: ID does not exist" containerID="713e0c3eaf339ff962703d6c50670684e698409b50abf5a06b118fa89ae1881a" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.415634 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"713e0c3eaf339ff962703d6c50670684e698409b50abf5a06b118fa89ae1881a"} err="failed to get container status \"713e0c3eaf339ff962703d6c50670684e698409b50abf5a06b118fa89ae1881a\": rpc error: code = NotFound desc = could not find container \"713e0c3eaf339ff962703d6c50670684e698409b50abf5a06b118fa89ae1881a\": container 
with ID starting with 713e0c3eaf339ff962703d6c50670684e698409b50abf5a06b118fa89ae1881a not found: ID does not exist" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.415655 4747 scope.go:117] "RemoveContainer" containerID="bc0558e63876a6f899739254fec5f7486396a430a3468c432efc2e1e673235bd" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.425978 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f66lm"] Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.435687 4747 scope.go:117] "RemoveContainer" containerID="bc0558e63876a6f899739254fec5f7486396a430a3468c432efc2e1e673235bd" Dec 15 05:40:46 crc kubenswrapper[4747]: E1215 05:40:46.436075 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc0558e63876a6f899739254fec5f7486396a430a3468c432efc2e1e673235bd\": container with ID starting with bc0558e63876a6f899739254fec5f7486396a430a3468c432efc2e1e673235bd not found: ID does not exist" containerID="bc0558e63876a6f899739254fec5f7486396a430a3468c432efc2e1e673235bd" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.436113 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc0558e63876a6f899739254fec5f7486396a430a3468c432efc2e1e673235bd"} err="failed to get container status \"bc0558e63876a6f899739254fec5f7486396a430a3468c432efc2e1e673235bd\": rpc error: code = NotFound desc = could not find container \"bc0558e63876a6f899739254fec5f7486396a430a3468c432efc2e1e673235bd\": container with ID starting with bc0558e63876a6f899739254fec5f7486396a430a3468c432efc2e1e673235bd not found: ID does not exist" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.436142 4747 scope.go:117] "RemoveContainer" containerID="a55b037d1361bf6bdeb116e53646d89dd20ba8c32333ecf135a3cbfff6a69724" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.442995 4747 reconciler_common.go:293] "Volume detached for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97c38c52-062a-4f94-9992-f944bb0519ee-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.448275 4747 scope.go:117] "RemoveContainer" containerID="fe86caebfc55bcc4760020b118e3f8649e759a7f0e83ef55fdb5896204e4dd50" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.476318 4747 scope.go:117] "RemoveContainer" containerID="c2676ffce716097e93af38f66714dc6a4bdd1c307b5e4bd219e8c547dc0f9b2e" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.493350 4747 scope.go:117] "RemoveContainer" containerID="a55b037d1361bf6bdeb116e53646d89dd20ba8c32333ecf135a3cbfff6a69724" Dec 15 05:40:46 crc kubenswrapper[4747]: E1215 05:40:46.493833 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a55b037d1361bf6bdeb116e53646d89dd20ba8c32333ecf135a3cbfff6a69724\": container with ID starting with a55b037d1361bf6bdeb116e53646d89dd20ba8c32333ecf135a3cbfff6a69724 not found: ID does not exist" containerID="a55b037d1361bf6bdeb116e53646d89dd20ba8c32333ecf135a3cbfff6a69724" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.493903 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a55b037d1361bf6bdeb116e53646d89dd20ba8c32333ecf135a3cbfff6a69724"} err="failed to get container status \"a55b037d1361bf6bdeb116e53646d89dd20ba8c32333ecf135a3cbfff6a69724\": rpc error: code = NotFound desc = could not find container \"a55b037d1361bf6bdeb116e53646d89dd20ba8c32333ecf135a3cbfff6a69724\": container with ID starting with a55b037d1361bf6bdeb116e53646d89dd20ba8c32333ecf135a3cbfff6a69724 not found: ID does not exist" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.493971 4747 scope.go:117] "RemoveContainer" containerID="fe86caebfc55bcc4760020b118e3f8649e759a7f0e83ef55fdb5896204e4dd50" Dec 15 05:40:46 crc kubenswrapper[4747]: E1215 05:40:46.494373 4747 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe86caebfc55bcc4760020b118e3f8649e759a7f0e83ef55fdb5896204e4dd50\": container with ID starting with fe86caebfc55bcc4760020b118e3f8649e759a7f0e83ef55fdb5896204e4dd50 not found: ID does not exist" containerID="fe86caebfc55bcc4760020b118e3f8649e759a7f0e83ef55fdb5896204e4dd50" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.494421 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe86caebfc55bcc4760020b118e3f8649e759a7f0e83ef55fdb5896204e4dd50"} err="failed to get container status \"fe86caebfc55bcc4760020b118e3f8649e759a7f0e83ef55fdb5896204e4dd50\": rpc error: code = NotFound desc = could not find container \"fe86caebfc55bcc4760020b118e3f8649e759a7f0e83ef55fdb5896204e4dd50\": container with ID starting with fe86caebfc55bcc4760020b118e3f8649e759a7f0e83ef55fdb5896204e4dd50 not found: ID does not exist" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.494446 4747 scope.go:117] "RemoveContainer" containerID="c2676ffce716097e93af38f66714dc6a4bdd1c307b5e4bd219e8c547dc0f9b2e" Dec 15 05:40:46 crc kubenswrapper[4747]: E1215 05:40:46.494737 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2676ffce716097e93af38f66714dc6a4bdd1c307b5e4bd219e8c547dc0f9b2e\": container with ID starting with c2676ffce716097e93af38f66714dc6a4bdd1c307b5e4bd219e8c547dc0f9b2e not found: ID does not exist" containerID="c2676ffce716097e93af38f66714dc6a4bdd1c307b5e4bd219e8c547dc0f9b2e" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.494776 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2676ffce716097e93af38f66714dc6a4bdd1c307b5e4bd219e8c547dc0f9b2e"} err="failed to get container status \"c2676ffce716097e93af38f66714dc6a4bdd1c307b5e4bd219e8c547dc0f9b2e\": rpc error: code = NotFound 
desc = could not find container \"c2676ffce716097e93af38f66714dc6a4bdd1c307b5e4bd219e8c547dc0f9b2e\": container with ID starting with c2676ffce716097e93af38f66714dc6a4bdd1c307b5e4bd219e8c547dc0f9b2e not found: ID does not exist" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.494802 4747 scope.go:117] "RemoveContainer" containerID="e45fe7d5f93326148303da0b04cf5d9ee9bdcdd1dd56d6b397cf9d7ac983ede3" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.507385 4747 scope.go:117] "RemoveContainer" containerID="5858e4871934c59c25ec16c559ae584e7c4ddf029520ef607e2de41fab4d16df" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.525195 4747 scope.go:117] "RemoveContainer" containerID="6c41ed4198b50c7d1a59c743cc2e5a5b924568dbd7e1533487bdd81f2b69b307" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.538933 4747 scope.go:117] "RemoveContainer" containerID="e45fe7d5f93326148303da0b04cf5d9ee9bdcdd1dd56d6b397cf9d7ac983ede3" Dec 15 05:40:46 crc kubenswrapper[4747]: E1215 05:40:46.539220 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e45fe7d5f93326148303da0b04cf5d9ee9bdcdd1dd56d6b397cf9d7ac983ede3\": container with ID starting with e45fe7d5f93326148303da0b04cf5d9ee9bdcdd1dd56d6b397cf9d7ac983ede3 not found: ID does not exist" containerID="e45fe7d5f93326148303da0b04cf5d9ee9bdcdd1dd56d6b397cf9d7ac983ede3" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.539252 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e45fe7d5f93326148303da0b04cf5d9ee9bdcdd1dd56d6b397cf9d7ac983ede3"} err="failed to get container status \"e45fe7d5f93326148303da0b04cf5d9ee9bdcdd1dd56d6b397cf9d7ac983ede3\": rpc error: code = NotFound desc = could not find container \"e45fe7d5f93326148303da0b04cf5d9ee9bdcdd1dd56d6b397cf9d7ac983ede3\": container with ID starting with e45fe7d5f93326148303da0b04cf5d9ee9bdcdd1dd56d6b397cf9d7ac983ede3 not found: ID does not 
exist" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.539276 4747 scope.go:117] "RemoveContainer" containerID="5858e4871934c59c25ec16c559ae584e7c4ddf029520ef607e2de41fab4d16df" Dec 15 05:40:46 crc kubenswrapper[4747]: E1215 05:40:46.539542 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5858e4871934c59c25ec16c559ae584e7c4ddf029520ef607e2de41fab4d16df\": container with ID starting with 5858e4871934c59c25ec16c559ae584e7c4ddf029520ef607e2de41fab4d16df not found: ID does not exist" containerID="5858e4871934c59c25ec16c559ae584e7c4ddf029520ef607e2de41fab4d16df" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.539567 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5858e4871934c59c25ec16c559ae584e7c4ddf029520ef607e2de41fab4d16df"} err="failed to get container status \"5858e4871934c59c25ec16c559ae584e7c4ddf029520ef607e2de41fab4d16df\": rpc error: code = NotFound desc = could not find container \"5858e4871934c59c25ec16c559ae584e7c4ddf029520ef607e2de41fab4d16df\": container with ID starting with 5858e4871934c59c25ec16c559ae584e7c4ddf029520ef607e2de41fab4d16df not found: ID does not exist" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.539584 4747 scope.go:117] "RemoveContainer" containerID="6c41ed4198b50c7d1a59c743cc2e5a5b924568dbd7e1533487bdd81f2b69b307" Dec 15 05:40:46 crc kubenswrapper[4747]: E1215 05:40:46.539813 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c41ed4198b50c7d1a59c743cc2e5a5b924568dbd7e1533487bdd81f2b69b307\": container with ID starting with 6c41ed4198b50c7d1a59c743cc2e5a5b924568dbd7e1533487bdd81f2b69b307 not found: ID does not exist" containerID="6c41ed4198b50c7d1a59c743cc2e5a5b924568dbd7e1533487bdd81f2b69b307" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.539836 4747 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c41ed4198b50c7d1a59c743cc2e5a5b924568dbd7e1533487bdd81f2b69b307"} err="failed to get container status \"6c41ed4198b50c7d1a59c743cc2e5a5b924568dbd7e1533487bdd81f2b69b307\": rpc error: code = NotFound desc = could not find container \"6c41ed4198b50c7d1a59c743cc2e5a5b924568dbd7e1533487bdd81f2b69b307\": container with ID starting with 6c41ed4198b50c7d1a59c743cc2e5a5b924568dbd7e1533487bdd81f2b69b307 not found: ID does not exist" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.539853 4747 scope.go:117] "RemoveContainer" containerID="e5a5c4d573912ab05e31be526d1bcc2fab630bc4d43f092db81c43e6d63448ab" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.550232 4747 scope.go:117] "RemoveContainer" containerID="2c8c00ae211fd6e8bde8305f0f2fab031c657c4f6cd78a5b84b2dc426ba60b0d" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.563576 4747 scope.go:117] "RemoveContainer" containerID="02fbc7db47723434454501ab28c28875ec5d5edef3ae74efd55e41e23d579bdb" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.578418 4747 scope.go:117] "RemoveContainer" containerID="e5a5c4d573912ab05e31be526d1bcc2fab630bc4d43f092db81c43e6d63448ab" Dec 15 05:40:46 crc kubenswrapper[4747]: E1215 05:40:46.578739 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5a5c4d573912ab05e31be526d1bcc2fab630bc4d43f092db81c43e6d63448ab\": container with ID starting with e5a5c4d573912ab05e31be526d1bcc2fab630bc4d43f092db81c43e6d63448ab not found: ID does not exist" containerID="e5a5c4d573912ab05e31be526d1bcc2fab630bc4d43f092db81c43e6d63448ab" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.578768 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5a5c4d573912ab05e31be526d1bcc2fab630bc4d43f092db81c43e6d63448ab"} err="failed to get container status 
\"e5a5c4d573912ab05e31be526d1bcc2fab630bc4d43f092db81c43e6d63448ab\": rpc error: code = NotFound desc = could not find container \"e5a5c4d573912ab05e31be526d1bcc2fab630bc4d43f092db81c43e6d63448ab\": container with ID starting with e5a5c4d573912ab05e31be526d1bcc2fab630bc4d43f092db81c43e6d63448ab not found: ID does not exist" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.578788 4747 scope.go:117] "RemoveContainer" containerID="2c8c00ae211fd6e8bde8305f0f2fab031c657c4f6cd78a5b84b2dc426ba60b0d" Dec 15 05:40:46 crc kubenswrapper[4747]: E1215 05:40:46.579219 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c8c00ae211fd6e8bde8305f0f2fab031c657c4f6cd78a5b84b2dc426ba60b0d\": container with ID starting with 2c8c00ae211fd6e8bde8305f0f2fab031c657c4f6cd78a5b84b2dc426ba60b0d not found: ID does not exist" containerID="2c8c00ae211fd6e8bde8305f0f2fab031c657c4f6cd78a5b84b2dc426ba60b0d" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.579275 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c8c00ae211fd6e8bde8305f0f2fab031c657c4f6cd78a5b84b2dc426ba60b0d"} err="failed to get container status \"2c8c00ae211fd6e8bde8305f0f2fab031c657c4f6cd78a5b84b2dc426ba60b0d\": rpc error: code = NotFound desc = could not find container \"2c8c00ae211fd6e8bde8305f0f2fab031c657c4f6cd78a5b84b2dc426ba60b0d\": container with ID starting with 2c8c00ae211fd6e8bde8305f0f2fab031c657c4f6cd78a5b84b2dc426ba60b0d not found: ID does not exist" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.579304 4747 scope.go:117] "RemoveContainer" containerID="02fbc7db47723434454501ab28c28875ec5d5edef3ae74efd55e41e23d579bdb" Dec 15 05:40:46 crc kubenswrapper[4747]: E1215 05:40:46.579730 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"02fbc7db47723434454501ab28c28875ec5d5edef3ae74efd55e41e23d579bdb\": container with ID starting with 02fbc7db47723434454501ab28c28875ec5d5edef3ae74efd55e41e23d579bdb not found: ID does not exist" containerID="02fbc7db47723434454501ab28c28875ec5d5edef3ae74efd55e41e23d579bdb" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.579765 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02fbc7db47723434454501ab28c28875ec5d5edef3ae74efd55e41e23d579bdb"} err="failed to get container status \"02fbc7db47723434454501ab28c28875ec5d5edef3ae74efd55e41e23d579bdb\": rpc error: code = NotFound desc = could not find container \"02fbc7db47723434454501ab28c28875ec5d5edef3ae74efd55e41e23d579bdb\": container with ID starting with 02fbc7db47723434454501ab28c28875ec5d5edef3ae74efd55e41e23d579bdb not found: ID does not exist" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.636140 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aed04d0-4166-4ed3-bf2b-39e9598d0160" path="/var/lib/kubelet/pods/3aed04d0-4166-4ed3-bf2b-39e9598d0160/volumes" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.637801 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496c9f6b-020f-4ba2-9031-4dfee47f18ab" path="/var/lib/kubelet/pods/496c9f6b-020f-4ba2-9031-4dfee47f18ab/volumes" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.638491 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68eca474-5187-41ca-b67f-cb316a4ab410" path="/var/lib/kubelet/pods/68eca474-5187-41ca-b67f-cb316a4ab410/volumes" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.640330 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a524d92-a1c1-4494-b487-ba0df0e6a1ec" path="/var/lib/kubelet/pods/9a524d92-a1c1-4494-b487-ba0df0e6a1ec/volumes" Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.669445 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-hhfhz"] Dec 15 05:40:46 crc kubenswrapper[4747]: I1215 05:40:46.676321 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hhfhz"] Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.354240 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qr5lt" event={"ID":"f22206aa-87c5-4c96-b146-53b0890697fa","Type":"ContainerStarted","Data":"cf927490174d0bbf2ca29cb7ab81c998594c7712d718c44dd84160d7d8524938"} Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.354638 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qr5lt" event={"ID":"f22206aa-87c5-4c96-b146-53b0890697fa","Type":"ContainerStarted","Data":"0814339bc01eb65c5678a371175f6dc1c4d7c088d99b3dbc7a051cdecf2ddeee"} Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.355018 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qr5lt" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.359322 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qr5lt" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.391328 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qr5lt" podStartSLOduration=2.391308911 podStartE2EDuration="2.391308911s" podCreationTimestamp="2025-12-15 05:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:40:47.375341799 +0000 UTC m=+211.071853716" watchObservedRunningTime="2025-12-15 05:40:47.391308911 +0000 UTC m=+211.087820828" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.834669 4747 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-r755m"] Dec 15 05:40:47 crc kubenswrapper[4747]: E1215 05:40:47.835153 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97c38c52-062a-4f94-9992-f944bb0519ee" containerName="extract-utilities" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.835177 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c38c52-062a-4f94-9992-f944bb0519ee" containerName="extract-utilities" Dec 15 05:40:47 crc kubenswrapper[4747]: E1215 05:40:47.835192 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="496c9f6b-020f-4ba2-9031-4dfee47f18ab" containerName="registry-server" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.835200 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="496c9f6b-020f-4ba2-9031-4dfee47f18ab" containerName="registry-server" Dec 15 05:40:47 crc kubenswrapper[4747]: E1215 05:40:47.835216 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68eca474-5187-41ca-b67f-cb316a4ab410" containerName="extract-utilities" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.835223 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="68eca474-5187-41ca-b67f-cb316a4ab410" containerName="extract-utilities" Dec 15 05:40:47 crc kubenswrapper[4747]: E1215 05:40:47.835230 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97c38c52-062a-4f94-9992-f944bb0519ee" containerName="extract-content" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.835236 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c38c52-062a-4f94-9992-f944bb0519ee" containerName="extract-content" Dec 15 05:40:47 crc kubenswrapper[4747]: E1215 05:40:47.835247 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aed04d0-4166-4ed3-bf2b-39e9598d0160" containerName="marketplace-operator" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.835253 4747 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3aed04d0-4166-4ed3-bf2b-39e9598d0160" containerName="marketplace-operator" Dec 15 05:40:47 crc kubenswrapper[4747]: E1215 05:40:47.835263 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="496c9f6b-020f-4ba2-9031-4dfee47f18ab" containerName="extract-utilities" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.835269 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="496c9f6b-020f-4ba2-9031-4dfee47f18ab" containerName="extract-utilities" Dec 15 05:40:47 crc kubenswrapper[4747]: E1215 05:40:47.835276 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68eca474-5187-41ca-b67f-cb316a4ab410" containerName="registry-server" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.835285 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="68eca474-5187-41ca-b67f-cb316a4ab410" containerName="registry-server" Dec 15 05:40:47 crc kubenswrapper[4747]: E1215 05:40:47.835335 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="496c9f6b-020f-4ba2-9031-4dfee47f18ab" containerName="extract-content" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.835343 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="496c9f6b-020f-4ba2-9031-4dfee47f18ab" containerName="extract-content" Dec 15 05:40:47 crc kubenswrapper[4747]: E1215 05:40:47.835351 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a524d92-a1c1-4494-b487-ba0df0e6a1ec" containerName="registry-server" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.835359 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a524d92-a1c1-4494-b487-ba0df0e6a1ec" containerName="registry-server" Dec 15 05:40:47 crc kubenswrapper[4747]: E1215 05:40:47.835368 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a524d92-a1c1-4494-b487-ba0df0e6a1ec" containerName="extract-utilities" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.835378 4747 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9a524d92-a1c1-4494-b487-ba0df0e6a1ec" containerName="extract-utilities" Dec 15 05:40:47 crc kubenswrapper[4747]: E1215 05:40:47.835389 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97c38c52-062a-4f94-9992-f944bb0519ee" containerName="registry-server" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.835395 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c38c52-062a-4f94-9992-f944bb0519ee" containerName="registry-server" Dec 15 05:40:47 crc kubenswrapper[4747]: E1215 05:40:47.835404 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68eca474-5187-41ca-b67f-cb316a4ab410" containerName="extract-content" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.835411 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="68eca474-5187-41ca-b67f-cb316a4ab410" containerName="extract-content" Dec 15 05:40:47 crc kubenswrapper[4747]: E1215 05:40:47.835421 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a524d92-a1c1-4494-b487-ba0df0e6a1ec" containerName="extract-content" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.835466 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a524d92-a1c1-4494-b487-ba0df0e6a1ec" containerName="extract-content" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.835688 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a524d92-a1c1-4494-b487-ba0df0e6a1ec" containerName="registry-server" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.835701 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aed04d0-4166-4ed3-bf2b-39e9598d0160" containerName="marketplace-operator" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.835710 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="496c9f6b-020f-4ba2-9031-4dfee47f18ab" containerName="registry-server" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.835721 4747 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="68eca474-5187-41ca-b67f-cb316a4ab410" containerName="registry-server" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.835733 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="97c38c52-062a-4f94-9992-f944bb0519ee" containerName="registry-server" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.836824 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r755m" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.839054 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.839330 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r755m"] Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.862185 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njbzz\" (UniqueName: \"kubernetes.io/projected/efb301de-15d1-452a-b8e9-10296872545b-kube-api-access-njbzz\") pod \"redhat-marketplace-r755m\" (UID: \"efb301de-15d1-452a-b8e9-10296872545b\") " pod="openshift-marketplace/redhat-marketplace-r755m" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.862234 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb301de-15d1-452a-b8e9-10296872545b-utilities\") pod \"redhat-marketplace-r755m\" (UID: \"efb301de-15d1-452a-b8e9-10296872545b\") " pod="openshift-marketplace/redhat-marketplace-r755m" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.862454 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb301de-15d1-452a-b8e9-10296872545b-catalog-content\") pod \"redhat-marketplace-r755m\" (UID: 
\"efb301de-15d1-452a-b8e9-10296872545b\") " pod="openshift-marketplace/redhat-marketplace-r755m" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.965035 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njbzz\" (UniqueName: \"kubernetes.io/projected/efb301de-15d1-452a-b8e9-10296872545b-kube-api-access-njbzz\") pod \"redhat-marketplace-r755m\" (UID: \"efb301de-15d1-452a-b8e9-10296872545b\") " pod="openshift-marketplace/redhat-marketplace-r755m" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.965081 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb301de-15d1-452a-b8e9-10296872545b-utilities\") pod \"redhat-marketplace-r755m\" (UID: \"efb301de-15d1-452a-b8e9-10296872545b\") " pod="openshift-marketplace/redhat-marketplace-r755m" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.965127 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb301de-15d1-452a-b8e9-10296872545b-catalog-content\") pod \"redhat-marketplace-r755m\" (UID: \"efb301de-15d1-452a-b8e9-10296872545b\") " pod="openshift-marketplace/redhat-marketplace-r755m" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.965507 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb301de-15d1-452a-b8e9-10296872545b-catalog-content\") pod \"redhat-marketplace-r755m\" (UID: \"efb301de-15d1-452a-b8e9-10296872545b\") " pod="openshift-marketplace/redhat-marketplace-r755m" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.965842 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb301de-15d1-452a-b8e9-10296872545b-utilities\") pod \"redhat-marketplace-r755m\" (UID: \"efb301de-15d1-452a-b8e9-10296872545b\") " 
pod="openshift-marketplace/redhat-marketplace-r755m" Dec 15 05:40:47 crc kubenswrapper[4747]: I1215 05:40:47.984200 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njbzz\" (UniqueName: \"kubernetes.io/projected/efb301de-15d1-452a-b8e9-10296872545b-kube-api-access-njbzz\") pod \"redhat-marketplace-r755m\" (UID: \"efb301de-15d1-452a-b8e9-10296872545b\") " pod="openshift-marketplace/redhat-marketplace-r755m" Dec 15 05:40:48 crc kubenswrapper[4747]: I1215 05:40:48.032891 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vzrzt"] Dec 15 05:40:48 crc kubenswrapper[4747]: I1215 05:40:48.034700 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vzrzt" Dec 15 05:40:48 crc kubenswrapper[4747]: I1215 05:40:48.036972 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 15 05:40:48 crc kubenswrapper[4747]: I1215 05:40:48.041690 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vzrzt"] Dec 15 05:40:48 crc kubenswrapper[4747]: I1215 05:40:48.065634 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s69mr\" (UniqueName: \"kubernetes.io/projected/a54ad897-346d-40bf-8b62-df432709d572-kube-api-access-s69mr\") pod \"certified-operators-vzrzt\" (UID: \"a54ad897-346d-40bf-8b62-df432709d572\") " pod="openshift-marketplace/certified-operators-vzrzt" Dec 15 05:40:48 crc kubenswrapper[4747]: I1215 05:40:48.065703 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a54ad897-346d-40bf-8b62-df432709d572-utilities\") pod \"certified-operators-vzrzt\" (UID: \"a54ad897-346d-40bf-8b62-df432709d572\") " 
pod="openshift-marketplace/certified-operators-vzrzt" Dec 15 05:40:48 crc kubenswrapper[4747]: I1215 05:40:48.065740 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a54ad897-346d-40bf-8b62-df432709d572-catalog-content\") pod \"certified-operators-vzrzt\" (UID: \"a54ad897-346d-40bf-8b62-df432709d572\") " pod="openshift-marketplace/certified-operators-vzrzt" Dec 15 05:40:48 crc kubenswrapper[4747]: I1215 05:40:48.160811 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r755m" Dec 15 05:40:48 crc kubenswrapper[4747]: I1215 05:40:48.167109 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s69mr\" (UniqueName: \"kubernetes.io/projected/a54ad897-346d-40bf-8b62-df432709d572-kube-api-access-s69mr\") pod \"certified-operators-vzrzt\" (UID: \"a54ad897-346d-40bf-8b62-df432709d572\") " pod="openshift-marketplace/certified-operators-vzrzt" Dec 15 05:40:48 crc kubenswrapper[4747]: I1215 05:40:48.167464 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a54ad897-346d-40bf-8b62-df432709d572-utilities\") pod \"certified-operators-vzrzt\" (UID: \"a54ad897-346d-40bf-8b62-df432709d572\") " pod="openshift-marketplace/certified-operators-vzrzt" Dec 15 05:40:48 crc kubenswrapper[4747]: I1215 05:40:48.167544 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a54ad897-346d-40bf-8b62-df432709d572-catalog-content\") pod \"certified-operators-vzrzt\" (UID: \"a54ad897-346d-40bf-8b62-df432709d572\") " pod="openshift-marketplace/certified-operators-vzrzt" Dec 15 05:40:48 crc kubenswrapper[4747]: I1215 05:40:48.167942 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a54ad897-346d-40bf-8b62-df432709d572-utilities\") pod \"certified-operators-vzrzt\" (UID: \"a54ad897-346d-40bf-8b62-df432709d572\") " pod="openshift-marketplace/certified-operators-vzrzt" Dec 15 05:40:48 crc kubenswrapper[4747]: I1215 05:40:48.167974 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a54ad897-346d-40bf-8b62-df432709d572-catalog-content\") pod \"certified-operators-vzrzt\" (UID: \"a54ad897-346d-40bf-8b62-df432709d572\") " pod="openshift-marketplace/certified-operators-vzrzt" Dec 15 05:40:48 crc kubenswrapper[4747]: I1215 05:40:48.181905 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s69mr\" (UniqueName: \"kubernetes.io/projected/a54ad897-346d-40bf-8b62-df432709d572-kube-api-access-s69mr\") pod \"certified-operators-vzrzt\" (UID: \"a54ad897-346d-40bf-8b62-df432709d572\") " pod="openshift-marketplace/certified-operators-vzrzt" Dec 15 05:40:48 crc kubenswrapper[4747]: I1215 05:40:48.335227 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r755m"] Dec 15 05:40:48 crc kubenswrapper[4747]: W1215 05:40:48.339120 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefb301de_15d1_452a_b8e9_10296872545b.slice/crio-bfec25f769523a6fda0ab6051903b6f911006d8e0cdce7b8a7d8d3b375466f4e WatchSource:0}: Error finding container bfec25f769523a6fda0ab6051903b6f911006d8e0cdce7b8a7d8d3b375466f4e: Status 404 returned error can't find the container with id bfec25f769523a6fda0ab6051903b6f911006d8e0cdce7b8a7d8d3b375466f4e Dec 15 05:40:48 crc kubenswrapper[4747]: I1215 05:40:48.357032 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vzrzt" Dec 15 05:40:48 crc kubenswrapper[4747]: I1215 05:40:48.370121 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r755m" event={"ID":"efb301de-15d1-452a-b8e9-10296872545b","Type":"ContainerStarted","Data":"bfec25f769523a6fda0ab6051903b6f911006d8e0cdce7b8a7d8d3b375466f4e"} Dec 15 05:40:48 crc kubenswrapper[4747]: I1215 05:40:48.639777 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97c38c52-062a-4f94-9992-f944bb0519ee" path="/var/lib/kubelet/pods/97c38c52-062a-4f94-9992-f944bb0519ee/volumes" Dec 15 05:40:48 crc kubenswrapper[4747]: I1215 05:40:48.755460 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vzrzt"] Dec 15 05:40:48 crc kubenswrapper[4747]: W1215 05:40:48.762231 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda54ad897_346d_40bf_8b62_df432709d572.slice/crio-89264038d02af855512c6b748fcc09bbf874cf523f5c23887ba64cb5082b4311 WatchSource:0}: Error finding container 89264038d02af855512c6b748fcc09bbf874cf523f5c23887ba64cb5082b4311: Status 404 returned error can't find the container with id 89264038d02af855512c6b748fcc09bbf874cf523f5c23887ba64cb5082b4311 Dec 15 05:40:49 crc kubenswrapper[4747]: I1215 05:40:49.378530 4747 generic.go:334] "Generic (PLEG): container finished" podID="a54ad897-346d-40bf-8b62-df432709d572" containerID="463496bb01665bfd61704de0ac101a9dff590b78599e57641852ffee793b4683" exitCode=0 Dec 15 05:40:49 crc kubenswrapper[4747]: I1215 05:40:49.378590 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzrzt" event={"ID":"a54ad897-346d-40bf-8b62-df432709d572","Type":"ContainerDied","Data":"463496bb01665bfd61704de0ac101a9dff590b78599e57641852ffee793b4683"} Dec 15 05:40:49 crc kubenswrapper[4747]: I1215 
05:40:49.378673 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzrzt" event={"ID":"a54ad897-346d-40bf-8b62-df432709d572","Type":"ContainerStarted","Data":"89264038d02af855512c6b748fcc09bbf874cf523f5c23887ba64cb5082b4311"} Dec 15 05:40:49 crc kubenswrapper[4747]: I1215 05:40:49.380860 4747 generic.go:334] "Generic (PLEG): container finished" podID="efb301de-15d1-452a-b8e9-10296872545b" containerID="def9508d71eac6c7aa8bcee2e6d315b8d744bad038b09b5a28e28cb91391ba09" exitCode=0 Dec 15 05:40:49 crc kubenswrapper[4747]: I1215 05:40:49.380947 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r755m" event={"ID":"efb301de-15d1-452a-b8e9-10296872545b","Type":"ContainerDied","Data":"def9508d71eac6c7aa8bcee2e6d315b8d744bad038b09b5a28e28cb91391ba09"} Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.234754 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lbw8g"] Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.237964 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lbw8g" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.240758 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.248502 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lbw8g"] Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.295703 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2652452-9d91-4f09-9422-fa69bed43b9e-catalog-content\") pod \"redhat-operators-lbw8g\" (UID: \"e2652452-9d91-4f09-9422-fa69bed43b9e\") " pod="openshift-marketplace/redhat-operators-lbw8g" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.295756 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9sxc\" (UniqueName: \"kubernetes.io/projected/e2652452-9d91-4f09-9422-fa69bed43b9e-kube-api-access-j9sxc\") pod \"redhat-operators-lbw8g\" (UID: \"e2652452-9d91-4f09-9422-fa69bed43b9e\") " pod="openshift-marketplace/redhat-operators-lbw8g" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.295786 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2652452-9d91-4f09-9422-fa69bed43b9e-utilities\") pod \"redhat-operators-lbw8g\" (UID: \"e2652452-9d91-4f09-9422-fa69bed43b9e\") " pod="openshift-marketplace/redhat-operators-lbw8g" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.389443 4747 generic.go:334] "Generic (PLEG): container finished" podID="efb301de-15d1-452a-b8e9-10296872545b" containerID="d23f0a315ebf5b7d196ce0c07f8a4aa692e95b3f136c1fd5a70bbdf5c4a50677" exitCode=0 Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.389548 4747 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r755m" event={"ID":"efb301de-15d1-452a-b8e9-10296872545b","Type":"ContainerDied","Data":"d23f0a315ebf5b7d196ce0c07f8a4aa692e95b3f136c1fd5a70bbdf5c4a50677"} Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.401632 4747 generic.go:334] "Generic (PLEG): container finished" podID="a54ad897-346d-40bf-8b62-df432709d572" containerID="46d10fd9f370e495258f3b18a956da19fb4b0a66c420e92bf01daaa3170092b3" exitCode=0 Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.401677 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzrzt" event={"ID":"a54ad897-346d-40bf-8b62-df432709d572","Type":"ContainerDied","Data":"46d10fd9f370e495258f3b18a956da19fb4b0a66c420e92bf01daaa3170092b3"} Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.402132 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2652452-9d91-4f09-9422-fa69bed43b9e-catalog-content\") pod \"redhat-operators-lbw8g\" (UID: \"e2652452-9d91-4f09-9422-fa69bed43b9e\") " pod="openshift-marketplace/redhat-operators-lbw8g" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.402195 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9sxc\" (UniqueName: \"kubernetes.io/projected/e2652452-9d91-4f09-9422-fa69bed43b9e-kube-api-access-j9sxc\") pod \"redhat-operators-lbw8g\" (UID: \"e2652452-9d91-4f09-9422-fa69bed43b9e\") " pod="openshift-marketplace/redhat-operators-lbw8g" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.402240 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2652452-9d91-4f09-9422-fa69bed43b9e-utilities\") pod \"redhat-operators-lbw8g\" (UID: \"e2652452-9d91-4f09-9422-fa69bed43b9e\") " 
pod="openshift-marketplace/redhat-operators-lbw8g" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.402732 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2652452-9d91-4f09-9422-fa69bed43b9e-utilities\") pod \"redhat-operators-lbw8g\" (UID: \"e2652452-9d91-4f09-9422-fa69bed43b9e\") " pod="openshift-marketplace/redhat-operators-lbw8g" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.406805 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2652452-9d91-4f09-9422-fa69bed43b9e-catalog-content\") pod \"redhat-operators-lbw8g\" (UID: \"e2652452-9d91-4f09-9422-fa69bed43b9e\") " pod="openshift-marketplace/redhat-operators-lbw8g" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.428315 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dnrjw"] Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.430706 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dnrjw" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.433136 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9sxc\" (UniqueName: \"kubernetes.io/projected/e2652452-9d91-4f09-9422-fa69bed43b9e-kube-api-access-j9sxc\") pod \"redhat-operators-lbw8g\" (UID: \"e2652452-9d91-4f09-9422-fa69bed43b9e\") " pod="openshift-marketplace/redhat-operators-lbw8g" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.433378 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.436127 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dnrjw"] Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.502684 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxp7t\" (UniqueName: \"kubernetes.io/projected/16bd4ac3-acf8-400e-9413-fed487146d2f-kube-api-access-nxp7t\") pod \"community-operators-dnrjw\" (UID: \"16bd4ac3-acf8-400e-9413-fed487146d2f\") " pod="openshift-marketplace/community-operators-dnrjw" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.502792 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16bd4ac3-acf8-400e-9413-fed487146d2f-catalog-content\") pod \"community-operators-dnrjw\" (UID: \"16bd4ac3-acf8-400e-9413-fed487146d2f\") " pod="openshift-marketplace/community-operators-dnrjw" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.502821 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16bd4ac3-acf8-400e-9413-fed487146d2f-utilities\") pod \"community-operators-dnrjw\" (UID: 
\"16bd4ac3-acf8-400e-9413-fed487146d2f\") " pod="openshift-marketplace/community-operators-dnrjw" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.604428 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxp7t\" (UniqueName: \"kubernetes.io/projected/16bd4ac3-acf8-400e-9413-fed487146d2f-kube-api-access-nxp7t\") pod \"community-operators-dnrjw\" (UID: \"16bd4ac3-acf8-400e-9413-fed487146d2f\") " pod="openshift-marketplace/community-operators-dnrjw" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.604548 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16bd4ac3-acf8-400e-9413-fed487146d2f-catalog-content\") pod \"community-operators-dnrjw\" (UID: \"16bd4ac3-acf8-400e-9413-fed487146d2f\") " pod="openshift-marketplace/community-operators-dnrjw" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.604584 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16bd4ac3-acf8-400e-9413-fed487146d2f-utilities\") pod \"community-operators-dnrjw\" (UID: \"16bd4ac3-acf8-400e-9413-fed487146d2f\") " pod="openshift-marketplace/community-operators-dnrjw" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.605038 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16bd4ac3-acf8-400e-9413-fed487146d2f-utilities\") pod \"community-operators-dnrjw\" (UID: \"16bd4ac3-acf8-400e-9413-fed487146d2f\") " pod="openshift-marketplace/community-operators-dnrjw" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.605127 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16bd4ac3-acf8-400e-9413-fed487146d2f-catalog-content\") pod \"community-operators-dnrjw\" (UID: \"16bd4ac3-acf8-400e-9413-fed487146d2f\") 
" pod="openshift-marketplace/community-operators-dnrjw" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.606262 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lbw8g" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.621326 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxp7t\" (UniqueName: \"kubernetes.io/projected/16bd4ac3-acf8-400e-9413-fed487146d2f-kube-api-access-nxp7t\") pod \"community-operators-dnrjw\" (UID: \"16bd4ac3-acf8-400e-9413-fed487146d2f\") " pod="openshift-marketplace/community-operators-dnrjw" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.759016 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dnrjw" Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.778919 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lbw8g"] Dec 15 05:40:50 crc kubenswrapper[4747]: I1215 05:40:50.948365 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dnrjw"] Dec 15 05:40:51 crc kubenswrapper[4747]: W1215 05:40:51.020463 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16bd4ac3_acf8_400e_9413_fed487146d2f.slice/crio-594db38486737aee85a6bd5646f1754d7a8ca547d4e036a51ff6cef9ec5dda15 WatchSource:0}: Error finding container 594db38486737aee85a6bd5646f1754d7a8ca547d4e036a51ff6cef9ec5dda15: Status 404 returned error can't find the container with id 594db38486737aee85a6bd5646f1754d7a8ca547d4e036a51ff6cef9ec5dda15 Dec 15 05:40:51 crc kubenswrapper[4747]: I1215 05:40:51.409634 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vzrzt" 
event={"ID":"a54ad897-346d-40bf-8b62-df432709d572","Type":"ContainerStarted","Data":"572e2c0183796815ea4f7c1d5967a94f3659da70f9697e80e0b5a3b8ba3c549e"} Dec 15 05:40:51 crc kubenswrapper[4747]: I1215 05:40:51.412285 4747 generic.go:334] "Generic (PLEG): container finished" podID="16bd4ac3-acf8-400e-9413-fed487146d2f" containerID="231ff0798098ba5a62608c597b0071765506f8ebcb1a05074af743e652952836" exitCode=0 Dec 15 05:40:51 crc kubenswrapper[4747]: I1215 05:40:51.412369 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnrjw" event={"ID":"16bd4ac3-acf8-400e-9413-fed487146d2f","Type":"ContainerDied","Data":"231ff0798098ba5a62608c597b0071765506f8ebcb1a05074af743e652952836"} Dec 15 05:40:51 crc kubenswrapper[4747]: I1215 05:40:51.412401 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnrjw" event={"ID":"16bd4ac3-acf8-400e-9413-fed487146d2f","Type":"ContainerStarted","Data":"594db38486737aee85a6bd5646f1754d7a8ca547d4e036a51ff6cef9ec5dda15"} Dec 15 05:40:51 crc kubenswrapper[4747]: I1215 05:40:51.413983 4747 generic.go:334] "Generic (PLEG): container finished" podID="e2652452-9d91-4f09-9422-fa69bed43b9e" containerID="40cbe2dbe1aeac13e9c47fcdfd58f8255682a0befc49034457b88f66c5309c47" exitCode=0 Dec 15 05:40:51 crc kubenswrapper[4747]: I1215 05:40:51.414031 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbw8g" event={"ID":"e2652452-9d91-4f09-9422-fa69bed43b9e","Type":"ContainerDied","Data":"40cbe2dbe1aeac13e9c47fcdfd58f8255682a0befc49034457b88f66c5309c47"} Dec 15 05:40:51 crc kubenswrapper[4747]: I1215 05:40:51.414049 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbw8g" event={"ID":"e2652452-9d91-4f09-9422-fa69bed43b9e","Type":"ContainerStarted","Data":"9dbe880f5b2a2ecc482c02c8684d6b1327b29a0dc9e0873c7fd0302a01c5f1c9"} Dec 15 05:40:51 crc kubenswrapper[4747]: I1215 
05:40:51.422441 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r755m" event={"ID":"efb301de-15d1-452a-b8e9-10296872545b","Type":"ContainerStarted","Data":"a8b63f6f74f596cac2eb49ff384f788e7105522c32affa23c1336e91a6099972"} Dec 15 05:40:51 crc kubenswrapper[4747]: I1215 05:40:51.437275 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vzrzt" podStartSLOduration=1.945510837 podStartE2EDuration="3.437263174s" podCreationTimestamp="2025-12-15 05:40:48 +0000 UTC" firstStartedPulling="2025-12-15 05:40:49.380560976 +0000 UTC m=+213.077072893" lastFinishedPulling="2025-12-15 05:40:50.872313313 +0000 UTC m=+214.568825230" observedRunningTime="2025-12-15 05:40:51.432960586 +0000 UTC m=+215.129472503" watchObservedRunningTime="2025-12-15 05:40:51.437263174 +0000 UTC m=+215.133775091" Dec 15 05:40:51 crc kubenswrapper[4747]: I1215 05:40:51.450337 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r755m" podStartSLOduration=2.789000408 podStartE2EDuration="4.450327981s" podCreationTimestamp="2025-12-15 05:40:47 +0000 UTC" firstStartedPulling="2025-12-15 05:40:49.382907335 +0000 UTC m=+213.079419253" lastFinishedPulling="2025-12-15 05:40:51.04423491 +0000 UTC m=+214.740746826" observedRunningTime="2025-12-15 05:40:51.44749668 +0000 UTC m=+215.144008597" watchObservedRunningTime="2025-12-15 05:40:51.450327981 +0000 UTC m=+215.146839899" Dec 15 05:40:53 crc kubenswrapper[4747]: I1215 05:40:53.440981 4747 generic.go:334] "Generic (PLEG): container finished" podID="16bd4ac3-acf8-400e-9413-fed487146d2f" containerID="d97c261d87e60ecd5883136156239f704d84d93d305d133275aeef4dc90a3072" exitCode=0 Dec 15 05:40:53 crc kubenswrapper[4747]: I1215 05:40:53.441212 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnrjw" 
event={"ID":"16bd4ac3-acf8-400e-9413-fed487146d2f","Type":"ContainerDied","Data":"d97c261d87e60ecd5883136156239f704d84d93d305d133275aeef4dc90a3072"} Dec 15 05:40:53 crc kubenswrapper[4747]: I1215 05:40:53.450246 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbw8g" event={"ID":"e2652452-9d91-4f09-9422-fa69bed43b9e","Type":"ContainerStarted","Data":"a2a1cc194903ace032d50dadfca6d73f54376ee91fed7e1963505138c160bc61"} Dec 15 05:40:54 crc kubenswrapper[4747]: I1215 05:40:54.459133 4747 generic.go:334] "Generic (PLEG): container finished" podID="e2652452-9d91-4f09-9422-fa69bed43b9e" containerID="a2a1cc194903ace032d50dadfca6d73f54376ee91fed7e1963505138c160bc61" exitCode=0 Dec 15 05:40:54 crc kubenswrapper[4747]: I1215 05:40:54.459242 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbw8g" event={"ID":"e2652452-9d91-4f09-9422-fa69bed43b9e","Type":"ContainerDied","Data":"a2a1cc194903ace032d50dadfca6d73f54376ee91fed7e1963505138c160bc61"} Dec 15 05:40:54 crc kubenswrapper[4747]: I1215 05:40:54.462099 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnrjw" event={"ID":"16bd4ac3-acf8-400e-9413-fed487146d2f","Type":"ContainerStarted","Data":"086e7446a0dac6252dcc060a943156ee0f74f157139d4cd7d4baa32615360fee"} Dec 15 05:40:54 crc kubenswrapper[4747]: I1215 05:40:54.496041 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dnrjw" podStartSLOduration=1.8299100830000001 podStartE2EDuration="4.496017674s" podCreationTimestamp="2025-12-15 05:40:50 +0000 UTC" firstStartedPulling="2025-12-15 05:40:51.413456706 +0000 UTC m=+215.109968623" lastFinishedPulling="2025-12-15 05:40:54.079564298 +0000 UTC m=+217.776076214" observedRunningTime="2025-12-15 05:40:54.493215496 +0000 UTC m=+218.189727413" watchObservedRunningTime="2025-12-15 05:40:54.496017674 +0000 UTC 
m=+218.192529591" Dec 15 05:40:55 crc kubenswrapper[4747]: I1215 05:40:55.469894 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lbw8g" event={"ID":"e2652452-9d91-4f09-9422-fa69bed43b9e","Type":"ContainerStarted","Data":"d4e88e5e9ea2e8514de2e88851b87145d6671277a37e8f44531599095494c8aa"} Dec 15 05:40:55 crc kubenswrapper[4747]: I1215 05:40:55.491160 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lbw8g" podStartSLOduration=1.865816294 podStartE2EDuration="5.491142654s" podCreationTimestamp="2025-12-15 05:40:50 +0000 UTC" firstStartedPulling="2025-12-15 05:40:51.415494918 +0000 UTC m=+215.112006834" lastFinishedPulling="2025-12-15 05:40:55.040821276 +0000 UTC m=+218.737333194" observedRunningTime="2025-12-15 05:40:55.48786845 +0000 UTC m=+219.184380366" watchObservedRunningTime="2025-12-15 05:40:55.491142654 +0000 UTC m=+219.187654561" Dec 15 05:40:58 crc kubenswrapper[4747]: I1215 05:40:58.160912 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r755m" Dec 15 05:40:58 crc kubenswrapper[4747]: I1215 05:40:58.161299 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r755m" Dec 15 05:40:58 crc kubenswrapper[4747]: I1215 05:40:58.201580 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r755m" Dec 15 05:40:58 crc kubenswrapper[4747]: I1215 05:40:58.358147 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vzrzt" Dec 15 05:40:58 crc kubenswrapper[4747]: I1215 05:40:58.358189 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vzrzt" Dec 15 05:40:58 crc kubenswrapper[4747]: I1215 05:40:58.390846 4747 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vzrzt" Dec 15 05:40:58 crc kubenswrapper[4747]: I1215 05:40:58.517406 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vzrzt" Dec 15 05:40:58 crc kubenswrapper[4747]: I1215 05:40:58.519065 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r755m" Dec 15 05:40:58 crc kubenswrapper[4747]: I1215 05:40:58.865547 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 05:40:58 crc kubenswrapper[4747]: I1215 05:40:58.865608 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 05:40:58 crc kubenswrapper[4747]: I1215 05:40:58.865674 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 05:40:58 crc kubenswrapper[4747]: I1215 05:40:58.866133 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96"} pod="openshift-machine-config-operator/machine-config-daemon-nldtn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 15 05:40:58 crc kubenswrapper[4747]: I1215 05:40:58.866193 4747 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" containerID="cri-o://d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96" gracePeriod=600 Dec 15 05:41:00 crc kubenswrapper[4747]: I1215 05:41:00.607246 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lbw8g" Dec 15 05:41:00 crc kubenswrapper[4747]: I1215 05:41:00.607714 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lbw8g" Dec 15 05:41:00 crc kubenswrapper[4747]: I1215 05:41:00.652972 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lbw8g" Dec 15 05:41:00 crc kubenswrapper[4747]: I1215 05:41:00.759242 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dnrjw" Dec 15 05:41:00 crc kubenswrapper[4747]: I1215 05:41:00.759295 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dnrjw" Dec 15 05:41:00 crc kubenswrapper[4747]: I1215 05:41:00.790367 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dnrjw" Dec 15 05:41:01 crc kubenswrapper[4747]: I1215 05:41:01.502913 4747 generic.go:334] "Generic (PLEG): container finished" podID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerID="d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96" exitCode=0 Dec 15 05:41:01 crc kubenswrapper[4747]: I1215 05:41:01.502994 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerDied","Data":"d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96"} Dec 15 
05:41:01 crc kubenswrapper[4747]: I1215 05:41:01.535140 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lbw8g" Dec 15 05:41:01 crc kubenswrapper[4747]: I1215 05:41:01.536140 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dnrjw" Dec 15 05:41:02 crc kubenswrapper[4747]: I1215 05:41:02.511316 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerStarted","Data":"69403043616ef8b443997fe2ec8a367f1ef1de28024e4cb945e644c4878527e7"} Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.479790 4747 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.480077 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4" gracePeriod=15 Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.480127 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d" gracePeriod=15 Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.480170 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16" 
gracePeriod=15 Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.480195 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f" gracePeriod=15 Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.480149 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0" gracePeriod=15 Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.481851 4747 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 15 05:41:03 crc kubenswrapper[4747]: E1215 05:41:03.482213 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.482288 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 15 05:41:03 crc kubenswrapper[4747]: E1215 05:41:03.482349 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.482406 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 15 05:41:03 crc kubenswrapper[4747]: E1215 05:41:03.482461 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 15 05:41:03 crc kubenswrapper[4747]: 
I1215 05:41:03.482516 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 15 05:41:03 crc kubenswrapper[4747]: E1215 05:41:03.482583 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.482649 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 15 05:41:03 crc kubenswrapper[4747]: E1215 05:41:03.482706 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.482758 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 15 05:41:03 crc kubenswrapper[4747]: E1215 05:41:03.482809 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.482859 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 15 05:41:03 crc kubenswrapper[4747]: E1215 05:41:03.482914 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.482993 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.483134 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.483192 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.483250 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.483304 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.483356 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.483559 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.484727 4747 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.485243 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.492017 4747 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.516601 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.558847 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.558915 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.558953 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.558981 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.559017 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.559039 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.559068 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.559098 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.660104 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.660448 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.660486 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.660237 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.660521 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.660623 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.660637 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.660664 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.660773 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.660845 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.660872 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.660893 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.661014 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.661068 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.661072 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.661099 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: I1215 05:41:03.813000 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 15 05:41:03 crc kubenswrapper[4747]: W1215 05:41:03.834163 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-1a58a8925614235e500543a8fba19bc22b8ec291b6dbd4686fc1208241e985c2 WatchSource:0}: Error finding container 1a58a8925614235e500543a8fba19bc22b8ec291b6dbd4686fc1208241e985c2: Status 404 returned error can't find the container with id 1a58a8925614235e500543a8fba19bc22b8ec291b6dbd4686fc1208241e985c2 Dec 15 05:41:03 crc kubenswrapper[4747]: E1215 05:41:03.837602 4747 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.25.116:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18814d10e3709aea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-15 05:41:03.836568298 +0000 UTC m=+227.533080215,LastTimestamp:2025-12-15 05:41:03.836568298 +0000 UTC m=+227.533080215,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 15 05:41:04 crc kubenswrapper[4747]: I1215 05:41:04.524068 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e07df00ce4f7fbe290c00e9569fa660afea17327866ebdadbe609ffb309949a2"} Dec 15 05:41:04 crc kubenswrapper[4747]: I1215 05:41:04.525005 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1a58a8925614235e500543a8fba19bc22b8ec291b6dbd4686fc1208241e985c2"} Dec 15 05:41:04 crc kubenswrapper[4747]: I1215 05:41:04.524520 4747 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:04 crc kubenswrapper[4747]: I1215 05:41:04.526632 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 15 05:41:04 crc kubenswrapper[4747]: I1215 05:41:04.527816 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 15 05:41:04 crc kubenswrapper[4747]: I1215 05:41:04.528559 4747 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d" exitCode=0 Dec 15 05:41:04 crc kubenswrapper[4747]: I1215 05:41:04.528602 4747 generic.go:334] "Generic 
(PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0" exitCode=0 Dec 15 05:41:04 crc kubenswrapper[4747]: I1215 05:41:04.528619 4747 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16" exitCode=0 Dec 15 05:41:04 crc kubenswrapper[4747]: I1215 05:41:04.528629 4747 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f" exitCode=2 Dec 15 05:41:04 crc kubenswrapper[4747]: I1215 05:41:04.528688 4747 scope.go:117] "RemoveContainer" containerID="675507d9779ef73f4e25201102562a79a43654bce4a6e3b363f03e0e0b697578" Dec 15 05:41:04 crc kubenswrapper[4747]: I1215 05:41:04.530225 4747 generic.go:334] "Generic (PLEG): container finished" podID="2ccdb417-62a2-4f3a-8b63-742cfee41cde" containerID="e3fcf7d329060c7efe9fa909dd12bc96c5a2d1261d5886ac9f0a5716e4e15ee0" exitCode=0 Dec 15 05:41:04 crc kubenswrapper[4747]: I1215 05:41:04.530258 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ccdb417-62a2-4f3a-8b63-742cfee41cde","Type":"ContainerDied","Data":"e3fcf7d329060c7efe9fa909dd12bc96c5a2d1261d5886ac9f0a5716e4e15ee0"} Dec 15 05:41:04 crc kubenswrapper[4747]: I1215 05:41:04.530955 4747 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:04 crc kubenswrapper[4747]: I1215 05:41:04.531406 4747 status_manager.go:851] "Failed to get status for pod" podUID="2ccdb417-62a2-4f3a-8b63-742cfee41cde" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.544690 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.830311 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.831172 4747 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.831440 4747 status_manager.go:851] "Failed to get status for pod" podUID="2ccdb417-62a2-4f3a-8b63-742cfee41cde" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.836405 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.837341 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.837774 4747 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.838009 4747 status_manager.go:851] "Failed to get status for pod" podUID="2ccdb417-62a2-4f3a-8b63-742cfee41cde" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.838240 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.988443 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ccdb417-62a2-4f3a-8b63-742cfee41cde-kubelet-dir\") pod \"2ccdb417-62a2-4f3a-8b63-742cfee41cde\" (UID: \"2ccdb417-62a2-4f3a-8b63-742cfee41cde\") " Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.988489 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 
05:41:05.988510 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ccdb417-62a2-4f3a-8b63-742cfee41cde-kube-api-access\") pod \"2ccdb417-62a2-4f3a-8b63-742cfee41cde\" (UID: \"2ccdb417-62a2-4f3a-8b63-742cfee41cde\") " Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.988565 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ccdb417-62a2-4f3a-8b63-742cfee41cde-var-lock\") pod \"2ccdb417-62a2-4f3a-8b63-742cfee41cde\" (UID: \"2ccdb417-62a2-4f3a-8b63-742cfee41cde\") " Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.988591 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.988601 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.988634 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.988596 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ccdb417-62a2-4f3a-8b63-742cfee41cde-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2ccdb417-62a2-4f3a-8b63-742cfee41cde" (UID: "2ccdb417-62a2-4f3a-8b63-742cfee41cde"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.988658 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ccdb417-62a2-4f3a-8b63-742cfee41cde-var-lock" (OuterVolumeSpecName: "var-lock") pod "2ccdb417-62a2-4f3a-8b63-742cfee41cde" (UID: "2ccdb417-62a2-4f3a-8b63-742cfee41cde"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.988689 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.988775 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.989039 4747 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ccdb417-62a2-4f3a-8b63-742cfee41cde-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.989055 4747 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.989065 4747 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ccdb417-62a2-4f3a-8b63-742cfee41cde-var-lock\") on node \"crc\" DevicePath \"\"" Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.989075 4747 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.989084 4747 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 15 05:41:05 crc kubenswrapper[4747]: I1215 05:41:05.994323 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ccdb417-62a2-4f3a-8b63-742cfee41cde-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2ccdb417-62a2-4f3a-8b63-742cfee41cde" (UID: "2ccdb417-62a2-4f3a-8b63-742cfee41cde"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.090101 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ccdb417-62a2-4f3a-8b63-742cfee41cde-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.561379 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.563496 4747 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4" exitCode=0 Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.563656 4747 scope.go:117] "RemoveContainer" containerID="7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.563665 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.565698 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ccdb417-62a2-4f3a-8b63-742cfee41cde","Type":"ContainerDied","Data":"5be19ea9260fc3a20fdc82ec1ccb2941c1cb599ad5b4e4a3f3e1ea3baa898f3e"} Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.565734 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.565738 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5be19ea9260fc3a20fdc82ec1ccb2941c1cb599ad5b4e4a3f3e1ea3baa898f3e" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.582015 4747 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.582373 4747 status_manager.go:851] "Failed to get status for pod" podUID="2ccdb417-62a2-4f3a-8b63-742cfee41cde" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.582757 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.583431 4747 status_manager.go:851] "Failed to get status for pod" podUID="2ccdb417-62a2-4f3a-8b63-742cfee41cde" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.583673 4747 status_manager.go:851] "Failed to get status for pod" 
podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.583908 4747 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.585702 4747 scope.go:117] "RemoveContainer" containerID="c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.601492 4747 scope.go:117] "RemoveContainer" containerID="cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.613786 4747 scope.go:117] "RemoveContainer" containerID="e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.626994 4747 scope.go:117] "RemoveContainer" containerID="e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.632817 4747 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.633106 4747 status_manager.go:851] "Failed to get status for pod" podUID="2ccdb417-62a2-4f3a-8b63-742cfee41cde" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.633394 4747 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.638784 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.643335 4747 scope.go:117] "RemoveContainer" containerID="73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.666272 4747 scope.go:117] "RemoveContainer" containerID="7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d" Dec 15 05:41:06 crc kubenswrapper[4747]: E1215 05:41:06.666724 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\": container with ID starting with 7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d not found: ID does not exist" containerID="7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.666765 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d"} err="failed to get container status 
\"7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\": rpc error: code = NotFound desc = could not find container \"7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d\": container with ID starting with 7c6f0ff77c32912e767410211079dffc4e0681cf67e040e2cff227825813908d not found: ID does not exist" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.666799 4747 scope.go:117] "RemoveContainer" containerID="c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0" Dec 15 05:41:06 crc kubenswrapper[4747]: E1215 05:41:06.667239 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\": container with ID starting with c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0 not found: ID does not exist" containerID="c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.667282 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0"} err="failed to get container status \"c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\": rpc error: code = NotFound desc = could not find container \"c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0\": container with ID starting with c1eb44a2e19ac25028d439c6713ac53ffd632569c254dc95ce50afbd069bfce0 not found: ID does not exist" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.667321 4747 scope.go:117] "RemoveContainer" containerID="cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16" Dec 15 05:41:06 crc kubenswrapper[4747]: E1215 05:41:06.667691 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\": container with ID starting with cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16 not found: ID does not exist" containerID="cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.667724 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16"} err="failed to get container status \"cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\": rpc error: code = NotFound desc = could not find container \"cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16\": container with ID starting with cbff1352c532c33480e95e615398bd633f1c7e90182b0d8644f5aca64b252d16 not found: ID does not exist" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.667744 4747 scope.go:117] "RemoveContainer" containerID="e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f" Dec 15 05:41:06 crc kubenswrapper[4747]: E1215 05:41:06.668124 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\": container with ID starting with e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f not found: ID does not exist" containerID="e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.668157 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f"} err="failed to get container status \"e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\": rpc error: code = NotFound desc = could not find container \"e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f\": container with ID 
starting with e924d41a2bc9345d633f3fda9ff20e21513974c05fcd8bde6a48985110eb654f not found: ID does not exist" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.668179 4747 scope.go:117] "RemoveContainer" containerID="e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4" Dec 15 05:41:06 crc kubenswrapper[4747]: E1215 05:41:06.668528 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\": container with ID starting with e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4 not found: ID does not exist" containerID="e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.668560 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4"} err="failed to get container status \"e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\": rpc error: code = NotFound desc = could not find container \"e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4\": container with ID starting with e5f418948c7093761a372022136583bd6b268fd487b0a3923c126644d2673ba4 not found: ID does not exist" Dec 15 05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.668578 4747 scope.go:117] "RemoveContainer" containerID="73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a" Dec 15 05:41:06 crc kubenswrapper[4747]: E1215 05:41:06.668880 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\": container with ID starting with 73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a not found: ID does not exist" containerID="73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a" Dec 15 
05:41:06 crc kubenswrapper[4747]: I1215 05:41:06.668911 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a"} err="failed to get container status \"73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\": rpc error: code = NotFound desc = could not find container \"73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a\": container with ID starting with 73d806971dcd1cb9295ec4bfa478fcc748e99f0dbbe69f492bf3d18e790db95a not found: ID does not exist" Dec 15 05:41:10 crc kubenswrapper[4747]: E1215 05:41:10.122424 4747 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.25.116:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18814d10e3709aea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-15 05:41:03.836568298 +0000 UTC m=+227.533080215,LastTimestamp:2025-12-15 05:41:03.836568298 +0000 UTC m=+227.533080215,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 15 05:41:11 crc kubenswrapper[4747]: E1215 05:41:11.288996 4747 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:11 crc kubenswrapper[4747]: E1215 05:41:11.290284 4747 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:11 crc kubenswrapper[4747]: E1215 05:41:11.290818 4747 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:11 crc kubenswrapper[4747]: E1215 05:41:11.291174 4747 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:11 crc kubenswrapper[4747]: E1215 05:41:11.291640 4747 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:11 crc kubenswrapper[4747]: I1215 05:41:11.291697 4747 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 15 05:41:11 crc kubenswrapper[4747]: E1215 05:41:11.292093 4747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.116:6443: connect: connection refused" interval="200ms" Dec 15 05:41:11 crc kubenswrapper[4747]: E1215 05:41:11.493125 4747 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.116:6443: connect: connection refused" interval="400ms" Dec 15 05:41:11 crc kubenswrapper[4747]: E1215 05:41:11.893849 4747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.116:6443: connect: connection refused" interval="800ms" Dec 15 05:41:12 crc kubenswrapper[4747]: E1215 05:41:12.694920 4747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.116:6443: connect: connection refused" interval="1.6s" Dec 15 05:41:14 crc kubenswrapper[4747]: E1215 05:41:14.295557 4747 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.116:6443: connect: connection refused" interval="3.2s" Dec 15 05:41:14 crc kubenswrapper[4747]: I1215 05:41:14.629236 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:14 crc kubenswrapper[4747]: I1215 05:41:14.629971 4747 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:14 crc kubenswrapper[4747]: I1215 05:41:14.630331 4747 status_manager.go:851] "Failed to get status for pod" podUID="2ccdb417-62a2-4f3a-8b63-742cfee41cde" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:14 crc kubenswrapper[4747]: I1215 05:41:14.642196 4747 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31db1d28-81c8-4eae-989c-49168bd4e711" Dec 15 05:41:14 crc kubenswrapper[4747]: I1215 05:41:14.642223 4747 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31db1d28-81c8-4eae-989c-49168bd4e711" Dec 15 05:41:14 crc kubenswrapper[4747]: E1215 05:41:14.642691 4747 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:14 crc kubenswrapper[4747]: I1215 05:41:14.643787 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:14 crc kubenswrapper[4747]: W1215 05:41:14.664368 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-bd0fbf596d7f9532f0d1f8d34a573d6c027623094b1c8f8f972562e7b5dbc858 WatchSource:0}: Error finding container bd0fbf596d7f9532f0d1f8d34a573d6c027623094b1c8f8f972562e7b5dbc858: Status 404 returned error can't find the container with id bd0fbf596d7f9532f0d1f8d34a573d6c027623094b1c8f8f972562e7b5dbc858 Dec 15 05:41:15 crc kubenswrapper[4747]: I1215 05:41:15.619421 4747 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="d386b82368b71c1ca1a0e8c6280ceea4895b8b5b3ef3b7fe4b7811c84d7a32f4" exitCode=0 Dec 15 05:41:15 crc kubenswrapper[4747]: I1215 05:41:15.619499 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"d386b82368b71c1ca1a0e8c6280ceea4895b8b5b3ef3b7fe4b7811c84d7a32f4"} Dec 15 05:41:15 crc kubenswrapper[4747]: I1215 05:41:15.619806 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bd0fbf596d7f9532f0d1f8d34a573d6c027623094b1c8f8f972562e7b5dbc858"} Dec 15 05:41:15 crc kubenswrapper[4747]: I1215 05:41:15.620162 4747 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31db1d28-81c8-4eae-989c-49168bd4e711" Dec 15 05:41:15 crc kubenswrapper[4747]: I1215 05:41:15.620182 4747 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31db1d28-81c8-4eae-989c-49168bd4e711" Dec 15 05:41:15 crc kubenswrapper[4747]: I1215 05:41:15.620575 4747 status_manager.go:851] 
"Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:15 crc kubenswrapper[4747]: E1215 05:41:15.620708 4747 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:15 crc kubenswrapper[4747]: I1215 05:41:15.620977 4747 status_manager.go:851] "Failed to get status for pod" podUID="2ccdb417-62a2-4f3a-8b63-742cfee41cde" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.116:6443: connect: connection refused" Dec 15 05:41:16 crc kubenswrapper[4747]: I1215 05:41:16.515662 4747 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 15 05:41:16 crc kubenswrapper[4747]: I1215 05:41:16.516146 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 15 05:41:16 crc kubenswrapper[4747]: I1215 05:41:16.635440 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 15 05:41:16 crc kubenswrapper[4747]: I1215 05:41:16.636241 4747 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd" exitCode=1 Dec 15 05:41:16 crc kubenswrapper[4747]: I1215 05:41:16.636135 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"92044ead6cb96b2e2152c54b75a892bf784d831cb41bf7ee017ede51283f69bd"} Dec 15 05:41:16 crc kubenswrapper[4747]: I1215 05:41:16.636427 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"838e57bd4c6cfe9d6fab908c2f10302a53fc281a210a6e89a5734a77530952e7"} Dec 15 05:41:16 crc kubenswrapper[4747]: I1215 05:41:16.636525 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8d0ec2ccfe82998bf512e3230a54d169dee506ca2e7852545f075cf11f60c4a4"} Dec 15 05:41:16 crc kubenswrapper[4747]: I1215 05:41:16.636594 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a1111b6191ca1518139de8fc41bdcaff984d413e12aae6e14b038fa54e1d7d17"} Dec 15 05:41:16 crc kubenswrapper[4747]: I1215 05:41:16.636674 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cde7a5632c44a8a273778b24898b211eeb7d8dd7614e068130d61739e1329b1d"} Dec 15 05:41:16 crc 
kubenswrapper[4747]: I1215 05:41:16.636743 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd"} Dec 15 05:41:16 crc kubenswrapper[4747]: I1215 05:41:16.637000 4747 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31db1d28-81c8-4eae-989c-49168bd4e711" Dec 15 05:41:16 crc kubenswrapper[4747]: I1215 05:41:16.637040 4747 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31db1d28-81c8-4eae-989c-49168bd4e711" Dec 15 05:41:16 crc kubenswrapper[4747]: I1215 05:41:16.637433 4747 scope.go:117] "RemoveContainer" containerID="9fe7a2e5d3edede6651bef4bed5fb61df5e777cb2da607c135c9e529ad6d58fd" Dec 15 05:41:16 crc kubenswrapper[4747]: I1215 05:41:16.637471 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:17 crc kubenswrapper[4747]: I1215 05:41:17.644437 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 15 05:41:17 crc kubenswrapper[4747]: I1215 05:41:17.644772 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8f441a7aee6089100e90457d6cc561c0a0d9413bec86b7751858f3cff7583299"} Dec 15 05:41:19 crc kubenswrapper[4747]: I1215 05:41:19.644095 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:19 crc kubenswrapper[4747]: I1215 05:41:19.644432 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:19 crc kubenswrapper[4747]: I1215 05:41:19.648819 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:22 crc kubenswrapper[4747]: I1215 05:41:22.168723 4747 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:22 crc kubenswrapper[4747]: I1215 05:41:22.197972 4747 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="248bf1c8-8c16-48f8-8f87-b6bd8c3428cb" Dec 15 05:41:22 crc kubenswrapper[4747]: I1215 05:41:22.669679 4747 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31db1d28-81c8-4eae-989c-49168bd4e711" Dec 15 05:41:22 crc kubenswrapper[4747]: I1215 05:41:22.669715 4747 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="31db1d28-81c8-4eae-989c-49168bd4e711" Dec 15 05:41:22 crc kubenswrapper[4747]: I1215 05:41:22.671906 4747 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="248bf1c8-8c16-48f8-8f87-b6bd8c3428cb" Dec 15 05:41:22 crc kubenswrapper[4747]: I1215 05:41:22.964943 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 15 05:41:26 crc kubenswrapper[4747]: I1215 05:41:26.618486 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 15 05:41:26 crc kubenswrapper[4747]: I1215 05:41:26.622832 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 15 05:41:26 crc kubenswrapper[4747]: I1215 05:41:26.690998 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 15 05:41:30 crc kubenswrapper[4747]: I1215 05:41:30.460180 4747 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 15 05:41:31 crc kubenswrapper[4747]: I1215 05:41:31.859346 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 15 05:41:32 crc kubenswrapper[4747]: I1215 05:41:32.640474 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 15 05:41:33 crc kubenswrapper[4747]: I1215 05:41:33.206905 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 15 05:41:33 crc kubenswrapper[4747]: I1215 05:41:33.406307 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 15 05:41:33 crc kubenswrapper[4747]: I1215 05:41:33.555534 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 15 05:41:33 crc kubenswrapper[4747]: I1215 05:41:33.677605 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 15 05:41:34 crc kubenswrapper[4747]: I1215 05:41:34.076717 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 15 05:41:34 crc kubenswrapper[4747]: I1215 05:41:34.193799 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 15 05:41:34 crc kubenswrapper[4747]: I1215 05:41:34.422218 
4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 15 05:41:34 crc kubenswrapper[4747]: I1215 05:41:34.571162 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 15 05:41:34 crc kubenswrapper[4747]: I1215 05:41:34.614211 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 15 05:41:34 crc kubenswrapper[4747]: I1215 05:41:34.622614 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 15 05:41:34 crc kubenswrapper[4747]: I1215 05:41:34.774536 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 15 05:41:34 crc kubenswrapper[4747]: I1215 05:41:34.808843 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 15 05:41:34 crc kubenswrapper[4747]: I1215 05:41:34.964403 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 15 05:41:35 crc kubenswrapper[4747]: I1215 05:41:35.010866 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 15 05:41:35 crc kubenswrapper[4747]: I1215 05:41:35.190361 4747 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 15 05:41:35 crc kubenswrapper[4747]: I1215 05:41:35.221425 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 15 05:41:35 crc kubenswrapper[4747]: I1215 05:41:35.244424 4747 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 15 05:41:35 crc kubenswrapper[4747]: I1215 05:41:35.247430 4747 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=32.247402389 podStartE2EDuration="32.247402389s" podCreationTimestamp="2025-12-15 05:41:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:41:22.180752964 +0000 UTC m=+245.877264882" watchObservedRunningTime="2025-12-15 05:41:35.247402389 +0000 UTC m=+258.943914305" Dec 15 05:41:35 crc kubenswrapper[4747]: I1215 05:41:35.251645 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 15 05:41:35 crc kubenswrapper[4747]: I1215 05:41:35.251707 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 15 05:41:35 crc kubenswrapper[4747]: I1215 05:41:35.255677 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:35 crc kubenswrapper[4747]: I1215 05:41:35.267861 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.267845795 podStartE2EDuration="13.267845795s" podCreationTimestamp="2025-12-15 05:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:41:35.265643616 +0000 UTC m=+258.962155533" watchObservedRunningTime="2025-12-15 05:41:35.267845795 +0000 UTC m=+258.964357712" Dec 15 05:41:35 crc kubenswrapper[4747]: I1215 05:41:35.525312 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 15 05:41:35 crc kubenswrapper[4747]: I1215 05:41:35.699781 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 15 05:41:35 crc 
kubenswrapper[4747]: I1215 05:41:35.738348 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 15 05:41:35 crc kubenswrapper[4747]: I1215 05:41:35.747872 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 15 05:41:35 crc kubenswrapper[4747]: I1215 05:41:35.830457 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 15 05:41:35 crc kubenswrapper[4747]: I1215 05:41:35.894983 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 15 05:41:35 crc kubenswrapper[4747]: I1215 05:41:35.906415 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 15 05:41:36 crc kubenswrapper[4747]: I1215 05:41:36.278854 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 15 05:41:36 crc kubenswrapper[4747]: I1215 05:41:36.744232 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 15 05:41:37 crc kubenswrapper[4747]: I1215 05:41:37.098027 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 15 05:41:37 crc kubenswrapper[4747]: I1215 05:41:37.099796 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 15 05:41:37 crc kubenswrapper[4747]: I1215 05:41:37.169781 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 15 05:41:37 crc kubenswrapper[4747]: I1215 05:41:37.183155 4747 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-console"/"networking-console-plugin-cert" Dec 15 05:41:37 crc kubenswrapper[4747]: I1215 05:41:37.258837 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 15 05:41:37 crc kubenswrapper[4747]: I1215 05:41:37.271578 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 15 05:41:37 crc kubenswrapper[4747]: I1215 05:41:37.297992 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 15 05:41:37 crc kubenswrapper[4747]: I1215 05:41:37.383752 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 15 05:41:37 crc kubenswrapper[4747]: I1215 05:41:37.655377 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 15 05:41:37 crc kubenswrapper[4747]: I1215 05:41:37.771912 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 15 05:41:37 crc kubenswrapper[4747]: I1215 05:41:37.793129 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 15 05:41:37 crc kubenswrapper[4747]: I1215 05:41:37.800139 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 15 05:41:37 crc kubenswrapper[4747]: I1215 05:41:37.816562 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 15 05:41:37 crc kubenswrapper[4747]: I1215 05:41:37.907702 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 15 05:41:37 crc kubenswrapper[4747]: I1215 05:41:37.923952 4747 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 15 05:41:37 crc kubenswrapper[4747]: I1215 05:41:37.950550 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 15 05:41:37 crc kubenswrapper[4747]: I1215 05:41:37.997552 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 15 05:41:38 crc kubenswrapper[4747]: I1215 05:41:38.156570 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 15 05:41:38 crc kubenswrapper[4747]: I1215 05:41:38.189047 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 15 05:41:38 crc kubenswrapper[4747]: I1215 05:41:38.233580 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 15 05:41:38 crc kubenswrapper[4747]: I1215 05:41:38.471629 4747 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 15 05:41:38 crc kubenswrapper[4747]: I1215 05:41:38.533638 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 15 05:41:38 crc kubenswrapper[4747]: I1215 05:41:38.566795 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 15 05:41:38 crc kubenswrapper[4747]: I1215 05:41:38.598352 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 15 05:41:38 crc kubenswrapper[4747]: I1215 05:41:38.603800 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 15 05:41:38 crc 
kubenswrapper[4747]: I1215 05:41:38.686641 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 15 05:41:38 crc kubenswrapper[4747]: I1215 05:41:38.701144 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 15 05:41:38 crc kubenswrapper[4747]: I1215 05:41:38.903618 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 15 05:41:38 crc kubenswrapper[4747]: I1215 05:41:38.951440 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 15 05:41:39 crc kubenswrapper[4747]: I1215 05:41:39.164455 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 15 05:41:39 crc kubenswrapper[4747]: I1215 05:41:39.165915 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 15 05:41:39 crc kubenswrapper[4747]: I1215 05:41:39.270772 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 15 05:41:39 crc kubenswrapper[4747]: I1215 05:41:39.316383 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 15 05:41:39 crc kubenswrapper[4747]: I1215 05:41:39.331196 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 15 05:41:39 crc kubenswrapper[4747]: I1215 05:41:39.332598 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 15 05:41:39 crc kubenswrapper[4747]: I1215 05:41:39.356419 4747 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 15 05:41:39 crc kubenswrapper[4747]: I1215 05:41:39.369530 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 15 05:41:39 crc kubenswrapper[4747]: I1215 05:41:39.615945 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 15 05:41:39 crc kubenswrapper[4747]: I1215 05:41:39.711689 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 15 05:41:39 crc kubenswrapper[4747]: I1215 05:41:39.859666 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 15 05:41:39 crc kubenswrapper[4747]: I1215 05:41:39.864171 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 15 05:41:39 crc kubenswrapper[4747]: I1215 05:41:39.865093 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 15 05:41:40 crc kubenswrapper[4747]: I1215 05:41:40.116537 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 15 05:41:40 crc kubenswrapper[4747]: I1215 05:41:40.127731 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 15 05:41:40 crc kubenswrapper[4747]: I1215 05:41:40.147377 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 15 05:41:40 crc kubenswrapper[4747]: I1215 05:41:40.177819 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 15 05:41:40 
crc kubenswrapper[4747]: I1215 05:41:40.234566 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 15 05:41:40 crc kubenswrapper[4747]: I1215 05:41:40.360611 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 15 05:41:40 crc kubenswrapper[4747]: I1215 05:41:40.478366 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 15 05:41:40 crc kubenswrapper[4747]: I1215 05:41:40.596233 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 15 05:41:40 crc kubenswrapper[4747]: I1215 05:41:40.685478 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 15 05:41:40 crc kubenswrapper[4747]: I1215 05:41:40.695118 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 15 05:41:40 crc kubenswrapper[4747]: I1215 05:41:40.758527 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 15 05:41:40 crc kubenswrapper[4747]: I1215 05:41:40.768885 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 15 05:41:40 crc kubenswrapper[4747]: I1215 05:41:40.808766 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 15 05:41:40 crc kubenswrapper[4747]: I1215 05:41:40.812355 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 15 05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.026692 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 15 
05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.032037 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 15 05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.188065 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 15 05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.296855 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 15 05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.411804 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 15 05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.437965 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 15 05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.440355 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 15 05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.487192 4747 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 15 05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.504769 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 15 05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.513166 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 15 05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.547791 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 15 05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.548729 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 15 
05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.589111 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 15 05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.618855 4747 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 15 05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.672270 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 15 05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.734389 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 15 05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.854075 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 15 05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.854213 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 15 05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.855897 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 15 05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.927673 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 15 05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.947087 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 15 05:41:41 crc kubenswrapper[4747]: I1215 05:41:41.972659 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 15 05:41:42 crc kubenswrapper[4747]: I1215 05:41:42.029023 4747 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 15 05:41:42 crc kubenswrapper[4747]: I1215 05:41:42.050838 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 15 05:41:42 crc kubenswrapper[4747]: I1215 05:41:42.103035 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 15 05:41:42 crc kubenswrapper[4747]: I1215 05:41:42.166993 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 15 05:41:42 crc kubenswrapper[4747]: I1215 05:41:42.239077 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 15 05:41:42 crc kubenswrapper[4747]: I1215 05:41:42.240858 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 15 05:41:42 crc kubenswrapper[4747]: I1215 05:41:42.377180 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 15 05:41:42 crc kubenswrapper[4747]: I1215 05:41:42.427604 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 15 05:41:42 crc kubenswrapper[4747]: I1215 05:41:42.446917 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 15 05:41:42 crc kubenswrapper[4747]: I1215 05:41:42.599866 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 15 05:41:42 crc kubenswrapper[4747]: I1215 05:41:42.612739 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 15 05:41:42 crc kubenswrapper[4747]: I1215 
05:41:42.628598 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 15 05:41:42 crc kubenswrapper[4747]: I1215 05:41:42.645443 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 15 05:41:42 crc kubenswrapper[4747]: I1215 05:41:42.709198 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 15 05:41:42 crc kubenswrapper[4747]: I1215 05:41:42.801028 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 15 05:41:42 crc kubenswrapper[4747]: I1215 05:41:42.856827 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 15 05:41:42 crc kubenswrapper[4747]: I1215 05:41:42.868758 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 15 05:41:42 crc kubenswrapper[4747]: I1215 05:41:42.963916 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 15 05:41:42 crc kubenswrapper[4747]: I1215 05:41:42.985964 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 15 05:41:43 crc kubenswrapper[4747]: I1215 05:41:43.155305 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 15 05:41:43 crc kubenswrapper[4747]: I1215 05:41:43.174003 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 15 05:41:43 crc kubenswrapper[4747]: I1215 05:41:43.195272 4747 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-serving-cert" Dec 15 05:41:43 crc kubenswrapper[4747]: I1215 05:41:43.198107 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 15 05:41:43 crc kubenswrapper[4747]: I1215 05:41:43.251553 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 15 05:41:43 crc kubenswrapper[4747]: I1215 05:41:43.260109 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 15 05:41:43 crc kubenswrapper[4747]: I1215 05:41:43.291318 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 15 05:41:43 crc kubenswrapper[4747]: I1215 05:41:43.303539 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 15 05:41:43 crc kubenswrapper[4747]: I1215 05:41:43.304089 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 15 05:41:43 crc kubenswrapper[4747]: I1215 05:41:43.361202 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 15 05:41:43 crc kubenswrapper[4747]: I1215 05:41:43.419192 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 15 05:41:43 crc kubenswrapper[4747]: I1215 05:41:43.496351 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 15 05:41:43 crc kubenswrapper[4747]: I1215 05:41:43.559801 4747 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 15 05:41:43 crc kubenswrapper[4747]: I1215 05:41:43.560174 4747 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://e07df00ce4f7fbe290c00e9569fa660afea17327866ebdadbe609ffb309949a2" gracePeriod=5 Dec 15 05:41:43 crc kubenswrapper[4747]: I1215 05:41:43.602547 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 15 05:41:43 crc kubenswrapper[4747]: I1215 05:41:43.837690 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 15 05:41:43 crc kubenswrapper[4747]: I1215 05:41:43.890596 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 15 05:41:43 crc kubenswrapper[4747]: I1215 05:41:43.981348 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 15 05:41:44 crc kubenswrapper[4747]: I1215 05:41:44.016746 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 15 05:41:44 crc kubenswrapper[4747]: I1215 05:41:44.022019 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 15 05:41:44 crc kubenswrapper[4747]: I1215 05:41:44.055777 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 15 05:41:44 crc kubenswrapper[4747]: I1215 05:41:44.096043 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 15 05:41:44 crc kubenswrapper[4747]: I1215 05:41:44.139120 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 15 05:41:44 crc kubenswrapper[4747]: I1215 
05:41:44.150167 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 15 05:41:44 crc kubenswrapper[4747]: I1215 05:41:44.163222 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 15 05:41:44 crc kubenswrapper[4747]: I1215 05:41:44.175050 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 15 05:41:44 crc kubenswrapper[4747]: I1215 05:41:44.210175 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 15 05:41:44 crc kubenswrapper[4747]: I1215 05:41:44.244638 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 15 05:41:44 crc kubenswrapper[4747]: I1215 05:41:44.310270 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 15 05:41:44 crc kubenswrapper[4747]: I1215 05:41:44.345356 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 15 05:41:44 crc kubenswrapper[4747]: I1215 05:41:44.380960 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 15 05:41:44 crc kubenswrapper[4747]: I1215 05:41:44.382661 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 15 05:41:44 crc kubenswrapper[4747]: I1215 05:41:44.477224 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 15 05:41:44 crc kubenswrapper[4747]: I1215 05:41:44.495620 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 15 05:41:44 crc 
kubenswrapper[4747]: I1215 05:41:44.653664 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 15 05:41:44 crc kubenswrapper[4747]: I1215 05:41:44.683509 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 15 05:41:44 crc kubenswrapper[4747]: I1215 05:41:44.702361 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 15 05:41:44 crc kubenswrapper[4747]: I1215 05:41:44.927093 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 15 05:41:44 crc kubenswrapper[4747]: I1215 05:41:44.978187 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 15 05:41:45 crc kubenswrapper[4747]: I1215 05:41:45.041139 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 15 05:41:45 crc kubenswrapper[4747]: I1215 05:41:45.146767 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 15 05:41:45 crc kubenswrapper[4747]: I1215 05:41:45.192060 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 15 05:41:45 crc kubenswrapper[4747]: I1215 05:41:45.202254 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 15 05:41:45 crc kubenswrapper[4747]: I1215 05:41:45.251682 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 15 05:41:45 crc kubenswrapper[4747]: I1215 05:41:45.260504 4747 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 15 05:41:45 crc kubenswrapper[4747]: I1215 05:41:45.280277 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 15 05:41:45 crc kubenswrapper[4747]: I1215 05:41:45.413755 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 15 05:41:45 crc kubenswrapper[4747]: I1215 05:41:45.419695 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 15 05:41:45 crc kubenswrapper[4747]: I1215 05:41:45.548713 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 15 05:41:45 crc kubenswrapper[4747]: I1215 05:41:45.586421 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 15 05:41:45 crc kubenswrapper[4747]: I1215 05:41:45.596800 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 15 05:41:45 crc kubenswrapper[4747]: I1215 05:41:45.684803 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 15 05:41:45 crc kubenswrapper[4747]: I1215 05:41:45.708716 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 15 05:41:45 crc kubenswrapper[4747]: I1215 05:41:45.722439 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 15 05:41:45 crc kubenswrapper[4747]: I1215 05:41:45.762453 4747 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-console"/"networking-console-plugin" Dec 15 05:41:45 crc kubenswrapper[4747]: I1215 05:41:45.805165 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 15 05:41:45 crc kubenswrapper[4747]: I1215 05:41:45.850173 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.026917 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.055688 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.068114 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.144233 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.151907 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.234646 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.246206 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.425010 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.449844 4747 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.457314 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.572643 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.577654 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.621487 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.652018 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.655803 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.673664 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.682747 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.758471 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.861871 4747 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.889691 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.957738 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.959749 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.969664 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.989400 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 15 05:41:46 crc kubenswrapper[4747]: I1215 05:41:46.999572 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 15 05:41:47 crc kubenswrapper[4747]: I1215 05:41:47.048164 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 15 05:41:47 crc kubenswrapper[4747]: I1215 05:41:47.067823 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 15 05:41:47 crc kubenswrapper[4747]: I1215 05:41:47.106962 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 15 05:41:47 crc kubenswrapper[4747]: I1215 05:41:47.159388 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 15 05:41:47 crc kubenswrapper[4747]: I1215 
05:41:47.215786 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 15 05:41:47 crc kubenswrapper[4747]: I1215 05:41:47.270298 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 15 05:41:47 crc kubenswrapper[4747]: I1215 05:41:47.317532 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 15 05:41:47 crc kubenswrapper[4747]: I1215 05:41:47.326644 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 15 05:41:47 crc kubenswrapper[4747]: I1215 05:41:47.326784 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 15 05:41:47 crc kubenswrapper[4747]: I1215 05:41:47.577324 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 15 05:41:47 crc kubenswrapper[4747]: I1215 05:41:47.630167 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 15 05:41:47 crc kubenswrapper[4747]: I1215 05:41:47.630315 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 15 05:41:47 crc kubenswrapper[4747]: I1215 05:41:47.757156 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 15 05:41:47 crc kubenswrapper[4747]: I1215 05:41:47.763278 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 15 05:41:47 crc kubenswrapper[4747]: I1215 05:41:47.788286 4747 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 15 05:41:47 crc kubenswrapper[4747]: I1215 05:41:47.871440 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 15 05:41:47 crc kubenswrapper[4747]: I1215 05:41:47.956949 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 15 05:41:47 crc kubenswrapper[4747]: I1215 05:41:47.992206 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 15 05:41:48 crc kubenswrapper[4747]: I1215 05:41:48.023843 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 15 05:41:48 crc kubenswrapper[4747]: I1215 05:41:48.059014 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 15 05:41:48 crc kubenswrapper[4747]: I1215 05:41:48.116639 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 15 05:41:48 crc kubenswrapper[4747]: I1215 05:41:48.216499 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 15 05:41:48 crc kubenswrapper[4747]: I1215 05:41:48.290044 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 15 05:41:48 crc kubenswrapper[4747]: I1215 05:41:48.407378 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 15 05:41:48 crc kubenswrapper[4747]: I1215 05:41:48.515544 4747 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 15 05:41:48 crc kubenswrapper[4747]: I1215 05:41:48.594852 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 15 05:41:48 crc kubenswrapper[4747]: I1215 05:41:48.664098 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 15 05:41:48 crc kubenswrapper[4747]: I1215 05:41:48.716883 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 15 05:41:48 crc kubenswrapper[4747]: I1215 05:41:48.795348 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 15 05:41:48 crc kubenswrapper[4747]: I1215 05:41:48.795408 4747 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="e07df00ce4f7fbe290c00e9569fa660afea17327866ebdadbe609ffb309949a2" exitCode=137 Dec 15 05:41:48 crc kubenswrapper[4747]: I1215 05:41:48.799020 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 15 05:41:48 crc kubenswrapper[4747]: I1215 05:41:48.799323 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 15 05:41:48 crc kubenswrapper[4747]: I1215 05:41:48.800019 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 15 05:41:48 crc kubenswrapper[4747]: I1215 05:41:48.840565 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 15 05:41:48 crc kubenswrapper[4747]: I1215 05:41:48.924311 4747 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"serving-cert" Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.017360 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.057905 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.071258 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.105095 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.110464 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.110526 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.224313 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.224357 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.224428 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.224449 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.224504 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.224710 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: 
"var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.224798 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.224840 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.224865 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.231738 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.325843 4747 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.325867 4747 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.325877 4747 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.325889 4747 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.325897 4747 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.566654 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.801073 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.801612 4747 scope.go:117] "RemoveContainer" containerID="e07df00ce4f7fbe290c00e9569fa660afea17327866ebdadbe609ffb309949a2" Dec 15 05:41:49 crc 
kubenswrapper[4747]: I1215 05:41:49.801715 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 15 05:41:49 crc kubenswrapper[4747]: I1215 05:41:49.963709 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 15 05:41:50 crc kubenswrapper[4747]: I1215 05:41:50.252229 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 15 05:41:50 crc kubenswrapper[4747]: I1215 05:41:50.254512 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 15 05:41:50 crc kubenswrapper[4747]: I1215 05:41:50.508433 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 15 05:41:50 crc kubenswrapper[4747]: I1215 05:41:50.599305 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 15 05:41:50 crc kubenswrapper[4747]: I1215 05:41:50.636503 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 15 05:41:50 crc kubenswrapper[4747]: I1215 05:41:50.636809 4747 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 15 05:41:50 crc kubenswrapper[4747]: I1215 05:41:50.647521 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 15 05:41:50 crc kubenswrapper[4747]: I1215 05:41:50.647553 4747 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
mirrorPodUID="ae7db51a-dfef-4aee-ba06-9b963d12daa4" Dec 15 05:41:50 crc kubenswrapper[4747]: I1215 05:41:50.650171 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 15 05:41:50 crc kubenswrapper[4747]: I1215 05:41:50.650200 4747 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="ae7db51a-dfef-4aee-ba06-9b963d12daa4" Dec 15 05:41:51 crc kubenswrapper[4747]: I1215 05:41:51.538436 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 15 05:41:51 crc kubenswrapper[4747]: I1215 05:41:51.703589 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 15 05:41:51 crc kubenswrapper[4747]: I1215 05:41:51.902736 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 15 05:41:52 crc kubenswrapper[4747]: I1215 05:41:52.830762 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 15 05:42:14 crc kubenswrapper[4747]: I1215 05:42:14.648850 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cjc2b"] Dec 15 05:42:14 crc kubenswrapper[4747]: I1215 05:42:14.649719 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" podUID="fd50242e-74be-4e24-9e3c-121196f60867" containerName="controller-manager" containerID="cri-o://831a7d6cd9e1da123696edc9a56beceaa4eda8cd2203f9880f12abfae91b67f5" gracePeriod=30 Dec 15 05:42:14 crc kubenswrapper[4747]: I1215 05:42:14.651715 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq"] Dec 15 05:42:14 crc kubenswrapper[4747]: I1215 05:42:14.929688 4747 generic.go:334] "Generic (PLEG): container finished" podID="fd50242e-74be-4e24-9e3c-121196f60867" containerID="831a7d6cd9e1da123696edc9a56beceaa4eda8cd2203f9880f12abfae91b67f5" exitCode=0 Dec 15 05:42:14 crc kubenswrapper[4747]: I1215 05:42:14.930054 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" podUID="8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d" containerName="route-controller-manager" containerID="cri-o://6d68ac697094cc0bd7f3afcb31943ad19bf97c99a320beb1bd13bf2d0f5ab69d" gracePeriod=30 Dec 15 05:42:14 crc kubenswrapper[4747]: I1215 05:42:14.929755 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" event={"ID":"fd50242e-74be-4e24-9e3c-121196f60867","Type":"ContainerDied","Data":"831a7d6cd9e1da123696edc9a56beceaa4eda8cd2203f9880f12abfae91b67f5"} Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.025600 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.114871 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7fdc9849d6-4gg57"] Dec 15 05:42:15 crc kubenswrapper[4747]: E1215 05:42:15.115120 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd50242e-74be-4e24-9e3c-121196f60867" containerName="controller-manager" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.115139 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd50242e-74be-4e24-9e3c-121196f60867" containerName="controller-manager" Dec 15 05:42:15 crc kubenswrapper[4747]: E1215 05:42:15.115153 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.115159 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 15 05:42:15 crc kubenswrapper[4747]: E1215 05:42:15.115171 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ccdb417-62a2-4f3a-8b63-742cfee41cde" containerName="installer" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.115176 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ccdb417-62a2-4f3a-8b63-742cfee41cde" containerName="installer" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.115265 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd50242e-74be-4e24-9e3c-121196f60867" containerName="controller-manager" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.115274 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.115283 4747 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2ccdb417-62a2-4f3a-8b63-742cfee41cde" containerName="installer" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.115666 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.126895 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fdc9849d6-4gg57"] Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.218430 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd50242e-74be-4e24-9e3c-121196f60867-serving-cert\") pod \"fd50242e-74be-4e24-9e3c-121196f60867\" (UID: \"fd50242e-74be-4e24-9e3c-121196f60867\") " Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.218493 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd50242e-74be-4e24-9e3c-121196f60867-config\") pod \"fd50242e-74be-4e24-9e3c-121196f60867\" (UID: \"fd50242e-74be-4e24-9e3c-121196f60867\") " Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.218525 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd50242e-74be-4e24-9e3c-121196f60867-proxy-ca-bundles\") pod \"fd50242e-74be-4e24-9e3c-121196f60867\" (UID: \"fd50242e-74be-4e24-9e3c-121196f60867\") " Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.218603 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdbrg\" (UniqueName: \"kubernetes.io/projected/fd50242e-74be-4e24-9e3c-121196f60867-kube-api-access-zdbrg\") pod \"fd50242e-74be-4e24-9e3c-121196f60867\" (UID: \"fd50242e-74be-4e24-9e3c-121196f60867\") " Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.218625 4747 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd50242e-74be-4e24-9e3c-121196f60867-client-ca\") pod \"fd50242e-74be-4e24-9e3c-121196f60867\" (UID: \"fd50242e-74be-4e24-9e3c-121196f60867\") " Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.218812 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9408704c-13de-4028-aec6-d8db878ae765-proxy-ca-bundles\") pod \"controller-manager-7fdc9849d6-4gg57\" (UID: \"9408704c-13de-4028-aec6-d8db878ae765\") " pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.218855 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9408704c-13de-4028-aec6-d8db878ae765-serving-cert\") pod \"controller-manager-7fdc9849d6-4gg57\" (UID: \"9408704c-13de-4028-aec6-d8db878ae765\") " pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.218877 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9408704c-13de-4028-aec6-d8db878ae765-config\") pod \"controller-manager-7fdc9849d6-4gg57\" (UID: \"9408704c-13de-4028-aec6-d8db878ae765\") " pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.218911 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvh7n\" (UniqueName: \"kubernetes.io/projected/9408704c-13de-4028-aec6-d8db878ae765-kube-api-access-gvh7n\") pod \"controller-manager-7fdc9849d6-4gg57\" (UID: \"9408704c-13de-4028-aec6-d8db878ae765\") " 
pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.219020 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9408704c-13de-4028-aec6-d8db878ae765-client-ca\") pod \"controller-manager-7fdc9849d6-4gg57\" (UID: \"9408704c-13de-4028-aec6-d8db878ae765\") " pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.225173 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd50242e-74be-4e24-9e3c-121196f60867-client-ca" (OuterVolumeSpecName: "client-ca") pod "fd50242e-74be-4e24-9e3c-121196f60867" (UID: "fd50242e-74be-4e24-9e3c-121196f60867"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.225236 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd50242e-74be-4e24-9e3c-121196f60867-config" (OuterVolumeSpecName: "config") pod "fd50242e-74be-4e24-9e3c-121196f60867" (UID: "fd50242e-74be-4e24-9e3c-121196f60867"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.225615 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd50242e-74be-4e24-9e3c-121196f60867-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "fd50242e-74be-4e24-9e3c-121196f60867" (UID: "fd50242e-74be-4e24-9e3c-121196f60867"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.231359 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd50242e-74be-4e24-9e3c-121196f60867-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fd50242e-74be-4e24-9e3c-121196f60867" (UID: "fd50242e-74be-4e24-9e3c-121196f60867"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.231472 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd50242e-74be-4e24-9e3c-121196f60867-kube-api-access-zdbrg" (OuterVolumeSpecName: "kube-api-access-zdbrg") pod "fd50242e-74be-4e24-9e3c-121196f60867" (UID: "fd50242e-74be-4e24-9e3c-121196f60867"). InnerVolumeSpecName "kube-api-access-zdbrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.266553 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.320119 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9408704c-13de-4028-aec6-d8db878ae765-client-ca\") pod \"controller-manager-7fdc9849d6-4gg57\" (UID: \"9408704c-13de-4028-aec6-d8db878ae765\") " pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.320204 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9408704c-13de-4028-aec6-d8db878ae765-proxy-ca-bundles\") pod \"controller-manager-7fdc9849d6-4gg57\" (UID: \"9408704c-13de-4028-aec6-d8db878ae765\") " pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.320244 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9408704c-13de-4028-aec6-d8db878ae765-serving-cert\") pod \"controller-manager-7fdc9849d6-4gg57\" (UID: \"9408704c-13de-4028-aec6-d8db878ae765\") " pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.320270 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9408704c-13de-4028-aec6-d8db878ae765-config\") pod \"controller-manager-7fdc9849d6-4gg57\" (UID: \"9408704c-13de-4028-aec6-d8db878ae765\") " pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.320306 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvh7n\" (UniqueName: 
\"kubernetes.io/projected/9408704c-13de-4028-aec6-d8db878ae765-kube-api-access-gvh7n\") pod \"controller-manager-7fdc9849d6-4gg57\" (UID: \"9408704c-13de-4028-aec6-d8db878ae765\") " pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.320419 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd50242e-74be-4e24-9e3c-121196f60867-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.320439 4747 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd50242e-74be-4e24-9e3c-121196f60867-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.320453 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdbrg\" (UniqueName: \"kubernetes.io/projected/fd50242e-74be-4e24-9e3c-121196f60867-kube-api-access-zdbrg\") on node \"crc\" DevicePath \"\"" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.320464 4747 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd50242e-74be-4e24-9e3c-121196f60867-client-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.320477 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd50242e-74be-4e24-9e3c-121196f60867-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.321161 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9408704c-13de-4028-aec6-d8db878ae765-client-ca\") pod \"controller-manager-7fdc9849d6-4gg57\" (UID: \"9408704c-13de-4028-aec6-d8db878ae765\") " pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" Dec 15 05:42:15 
crc kubenswrapper[4747]: I1215 05:42:15.322533 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9408704c-13de-4028-aec6-d8db878ae765-config\") pod \"controller-manager-7fdc9849d6-4gg57\" (UID: \"9408704c-13de-4028-aec6-d8db878ae765\") " pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.333086 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9408704c-13de-4028-aec6-d8db878ae765-serving-cert\") pod \"controller-manager-7fdc9849d6-4gg57\" (UID: \"9408704c-13de-4028-aec6-d8db878ae765\") " pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.333783 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9408704c-13de-4028-aec6-d8db878ae765-proxy-ca-bundles\") pod \"controller-manager-7fdc9849d6-4gg57\" (UID: \"9408704c-13de-4028-aec6-d8db878ae765\") " pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.337392 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvh7n\" (UniqueName: \"kubernetes.io/projected/9408704c-13de-4028-aec6-d8db878ae765-kube-api-access-gvh7n\") pod \"controller-manager-7fdc9849d6-4gg57\" (UID: \"9408704c-13de-4028-aec6-d8db878ae765\") " pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.420867 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-config\") pod \"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d\" (UID: \"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d\") " Dec 15 05:42:15 crc 
kubenswrapper[4747]: I1215 05:42:15.420970 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-client-ca\") pod \"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d\" (UID: \"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d\") " Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.421041 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-serving-cert\") pod \"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d\" (UID: \"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d\") " Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.421074 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqwc4\" (UniqueName: \"kubernetes.io/projected/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-kube-api-access-gqwc4\") pod \"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d\" (UID: \"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d\") " Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.421420 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-client-ca" (OuterVolumeSpecName: "client-ca") pod "8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d" (UID: "8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.421895 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-config" (OuterVolumeSpecName: "config") pod "8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d" (UID: "8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.423521 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d" (UID: "8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.424189 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-kube-api-access-gqwc4" (OuterVolumeSpecName: "kube-api-access-gqwc4") pod "8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d" (UID: "8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d"). InnerVolumeSpecName "kube-api-access-gqwc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.435367 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.523205 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.523237 4747 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-client-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.523250 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.523261 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqwc4\" (UniqueName: \"kubernetes.io/projected/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d-kube-api-access-gqwc4\") on node \"crc\" DevicePath \"\"" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.601300 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fdc9849d6-4gg57"] Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.943271 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" event={"ID":"9408704c-13de-4028-aec6-d8db878ae765","Type":"ContainerStarted","Data":"972877489f18bbb9c8230ea8b33806f8b3e7c84bccd3955788f2ba578dd59455"} Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.943611 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" 
event={"ID":"9408704c-13de-4028-aec6-d8db878ae765","Type":"ContainerStarted","Data":"784747b25d353ef3b8e3ee46f5a2f8df8b8d7ac288c77cd724f9a1784a66f19d"} Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.945154 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.947436 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" event={"ID":"fd50242e-74be-4e24-9e3c-121196f60867","Type":"ContainerDied","Data":"49f089e258d68c8ec9c82ebd29c0184878bd3b6e57ff0682c3469f16157a6d26"} Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.947469 4747 scope.go:117] "RemoveContainer" containerID="831a7d6cd9e1da123696edc9a56beceaa4eda8cd2203f9880f12abfae91b67f5" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.947558 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cjc2b" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.953669 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.954180 4747 generic.go:334] "Generic (PLEG): container finished" podID="8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d" containerID="6d68ac697094cc0bd7f3afcb31943ad19bf97c99a320beb1bd13bf2d0f5ab69d" exitCode=0 Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.954205 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" event={"ID":"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d","Type":"ContainerDied","Data":"6d68ac697094cc0bd7f3afcb31943ad19bf97c99a320beb1bd13bf2d0f5ab69d"} Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.954222 4747 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" event={"ID":"8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d","Type":"ContainerDied","Data":"e7949581146ae66ff2adef154a63b9f3d81d8794aafb96baedfa4e832d4e4ba3"} Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.954229 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.961975 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" podStartSLOduration=0.961965335 podStartE2EDuration="961.965335ms" podCreationTimestamp="2025-12-15 05:42:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:42:15.957278588 +0000 UTC m=+299.653790506" watchObservedRunningTime="2025-12-15 05:42:15.961965335 +0000 UTC m=+299.658477251" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.966685 4747 scope.go:117] "RemoveContainer" containerID="6d68ac697094cc0bd7f3afcb31943ad19bf97c99a320beb1bd13bf2d0f5ab69d" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.987878 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq"] Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.989844 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bddlq"] Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.991175 4747 scope.go:117] "RemoveContainer" containerID="6d68ac697094cc0bd7f3afcb31943ad19bf97c99a320beb1bd13bf2d0f5ab69d" Dec 15 05:42:15 crc kubenswrapper[4747]: E1215 05:42:15.991608 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"6d68ac697094cc0bd7f3afcb31943ad19bf97c99a320beb1bd13bf2d0f5ab69d\": container with ID starting with 6d68ac697094cc0bd7f3afcb31943ad19bf97c99a320beb1bd13bf2d0f5ab69d not found: ID does not exist" containerID="6d68ac697094cc0bd7f3afcb31943ad19bf97c99a320beb1bd13bf2d0f5ab69d" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.991676 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d68ac697094cc0bd7f3afcb31943ad19bf97c99a320beb1bd13bf2d0f5ab69d"} err="failed to get container status \"6d68ac697094cc0bd7f3afcb31943ad19bf97c99a320beb1bd13bf2d0f5ab69d\": rpc error: code = NotFound desc = could not find container \"6d68ac697094cc0bd7f3afcb31943ad19bf97c99a320beb1bd13bf2d0f5ab69d\": container with ID starting with 6d68ac697094cc0bd7f3afcb31943ad19bf97c99a320beb1bd13bf2d0f5ab69d not found: ID does not exist" Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.996579 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cjc2b"] Dec 15 05:42:15 crc kubenswrapper[4747]: I1215 05:42:15.997482 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cjc2b"] Dec 15 05:42:16 crc kubenswrapper[4747]: I1215 05:42:16.513697 4747 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Dec 15 05:42:16 crc kubenswrapper[4747]: I1215 05:42:16.640739 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d" path="/var/lib/kubelet/pods/8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d/volumes" Dec 15 05:42:16 crc kubenswrapper[4747]: I1215 05:42:16.641574 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd50242e-74be-4e24-9e3c-121196f60867" path="/var/lib/kubelet/pods/fd50242e-74be-4e24-9e3c-121196f60867/volumes" Dec 15 05:42:16 crc 
kubenswrapper[4747]: I1215 05:42:16.998708 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf"] Dec 15 05:42:16 crc kubenswrapper[4747]: E1215 05:42:16.999053 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d" containerName="route-controller-manager" Dec 15 05:42:16 crc kubenswrapper[4747]: I1215 05:42:16.999073 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d" containerName="route-controller-manager" Dec 15 05:42:16 crc kubenswrapper[4747]: I1215 05:42:16.999227 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c412dfd-4cd4-47d5-ac92-59e7b59d6a2d" containerName="route-controller-manager" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.000661 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.002965 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.003231 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.003396 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.003745 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf"] Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.003872 4747 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.004517 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.004862 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.150917 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-serving-cert\") pod \"route-controller-manager-cf466c567-d9fnf\" (UID: \"ce82cd9c-0674-4a4e-8ecf-925dfc855d53\") " pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.150993 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-config\") pod \"route-controller-manager-cf466c567-d9fnf\" (UID: \"ce82cd9c-0674-4a4e-8ecf-925dfc855d53\") " pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.151024 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nffpw\" (UniqueName: \"kubernetes.io/projected/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-kube-api-access-nffpw\") pod \"route-controller-manager-cf466c567-d9fnf\" (UID: \"ce82cd9c-0674-4a4e-8ecf-925dfc855d53\") " pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.151060 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-client-ca\") pod \"route-controller-manager-cf466c567-d9fnf\" (UID: \"ce82cd9c-0674-4a4e-8ecf-925dfc855d53\") " pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.253786 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-serving-cert\") pod \"route-controller-manager-cf466c567-d9fnf\" (UID: \"ce82cd9c-0674-4a4e-8ecf-925dfc855d53\") " pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.253839 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-config\") pod \"route-controller-manager-cf466c567-d9fnf\" (UID: \"ce82cd9c-0674-4a4e-8ecf-925dfc855d53\") " pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.253863 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nffpw\" (UniqueName: \"kubernetes.io/projected/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-kube-api-access-nffpw\") pod \"route-controller-manager-cf466c567-d9fnf\" (UID: \"ce82cd9c-0674-4a4e-8ecf-925dfc855d53\") " pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.253887 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-client-ca\") pod \"route-controller-manager-cf466c567-d9fnf\" (UID: \"ce82cd9c-0674-4a4e-8ecf-925dfc855d53\") " 
pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.255321 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-client-ca\") pod \"route-controller-manager-cf466c567-d9fnf\" (UID: \"ce82cd9c-0674-4a4e-8ecf-925dfc855d53\") " pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.255604 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-config\") pod \"route-controller-manager-cf466c567-d9fnf\" (UID: \"ce82cd9c-0674-4a4e-8ecf-925dfc855d53\") " pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.259621 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-serving-cert\") pod \"route-controller-manager-cf466c567-d9fnf\" (UID: \"ce82cd9c-0674-4a4e-8ecf-925dfc855d53\") " pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.266490 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nffpw\" (UniqueName: \"kubernetes.io/projected/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-kube-api-access-nffpw\") pod \"route-controller-manager-cf466c567-d9fnf\" (UID: \"ce82cd9c-0674-4a4e-8ecf-925dfc855d53\") " pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.314626 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.674062 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf"] Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.964596 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" event={"ID":"ce82cd9c-0674-4a4e-8ecf-925dfc855d53","Type":"ContainerStarted","Data":"6abcc36ae574e063fe80a16d91f652f97ea2d70ecde740d105a17c7ec8c5d535"} Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.964890 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.964970 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" event={"ID":"ce82cd9c-0674-4a4e-8ecf-925dfc855d53","Type":"ContainerStarted","Data":"1f008aeae21c8901de2b4ff2d2d8790d8be1f5ef3453b8ad780127244b7562a0"} Dec 15 05:42:17 crc kubenswrapper[4747]: I1215 05:42:17.976336 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" podStartSLOduration=2.9763084810000002 podStartE2EDuration="2.976308481s" podCreationTimestamp="2025-12-15 05:42:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:42:17.9758927 +0000 UTC m=+301.672404617" watchObservedRunningTime="2025-12-15 05:42:17.976308481 +0000 UTC m=+301.672820398" Dec 15 05:42:18 crc kubenswrapper[4747]: I1215 05:42:18.096739 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.549678 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2h42r"] Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.550837 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.569141 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2h42r"] Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.645788 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrttq\" (UniqueName: \"kubernetes.io/projected/50753f16-a4fa-49bf-b81e-8e2eff370b64-kube-api-access-wrttq\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.645835 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50753f16-a4fa-49bf-b81e-8e2eff370b64-bound-sa-token\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.645865 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50753f16-a4fa-49bf-b81e-8e2eff370b64-trusted-ca\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 
05:42:32.645975 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/50753f16-a4fa-49bf-b81e-8e2eff370b64-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.646053 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/50753f16-a4fa-49bf-b81e-8e2eff370b64-registry-tls\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.646087 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/50753f16-a4fa-49bf-b81e-8e2eff370b64-registry-certificates\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.646116 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.646146 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/50753f16-a4fa-49bf-b81e-8e2eff370b64-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.665120 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.747005 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50753f16-a4fa-49bf-b81e-8e2eff370b64-bound-sa-token\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.747059 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50753f16-a4fa-49bf-b81e-8e2eff370b64-trusted-ca\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.747101 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/50753f16-a4fa-49bf-b81e-8e2eff370b64-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.747136 4747 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/50753f16-a4fa-49bf-b81e-8e2eff370b64-registry-tls\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.747157 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/50753f16-a4fa-49bf-b81e-8e2eff370b64-registry-certificates\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.747181 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/50753f16-a4fa-49bf-b81e-8e2eff370b64-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.747252 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrttq\" (UniqueName: \"kubernetes.io/projected/50753f16-a4fa-49bf-b81e-8e2eff370b64-kube-api-access-wrttq\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.748055 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/50753f16-a4fa-49bf-b81e-8e2eff370b64-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.748614 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50753f16-a4fa-49bf-b81e-8e2eff370b64-trusted-ca\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.748805 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/50753f16-a4fa-49bf-b81e-8e2eff370b64-registry-certificates\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.752705 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/50753f16-a4fa-49bf-b81e-8e2eff370b64-registry-tls\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.753213 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/50753f16-a4fa-49bf-b81e-8e2eff370b64-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.761997 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrttq\" (UniqueName: \"kubernetes.io/projected/50753f16-a4fa-49bf-b81e-8e2eff370b64-kube-api-access-wrttq\") pod \"image-registry-66df7c8f76-2h42r\" 
(UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.762866 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50753f16-a4fa-49bf-b81e-8e2eff370b64-bound-sa-token\") pod \"image-registry-66df7c8f76-2h42r\" (UID: \"50753f16-a4fa-49bf-b81e-8e2eff370b64\") " pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:32 crc kubenswrapper[4747]: I1215 05:42:32.863622 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:33 crc kubenswrapper[4747]: I1215 05:42:33.232968 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2h42r"] Dec 15 05:42:33 crc kubenswrapper[4747]: W1215 05:42:33.237430 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50753f16_a4fa_49bf_b81e_8e2eff370b64.slice/crio-cf9b5d5ab98b8404b02ed8a0cc9714c8d7fd96fbd617497a6706dbb508651c3f WatchSource:0}: Error finding container cf9b5d5ab98b8404b02ed8a0cc9714c8d7fd96fbd617497a6706dbb508651c3f: Status 404 returned error can't find the container with id cf9b5d5ab98b8404b02ed8a0cc9714c8d7fd96fbd617497a6706dbb508651c3f Dec 15 05:42:34 crc kubenswrapper[4747]: I1215 05:42:34.051295 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" event={"ID":"50753f16-a4fa-49bf-b81e-8e2eff370b64","Type":"ContainerStarted","Data":"8135f6889b0c66d34d76aedc88adc810a7be0256f8107027762fc2068c3f33db"} Dec 15 05:42:34 crc kubenswrapper[4747]: I1215 05:42:34.051684 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:34 crc 
kubenswrapper[4747]: I1215 05:42:34.051699 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" event={"ID":"50753f16-a4fa-49bf-b81e-8e2eff370b64","Type":"ContainerStarted","Data":"cf9b5d5ab98b8404b02ed8a0cc9714c8d7fd96fbd617497a6706dbb508651c3f"} Dec 15 05:42:34 crc kubenswrapper[4747]: I1215 05:42:34.079148 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" podStartSLOduration=2.079124302 podStartE2EDuration="2.079124302s" podCreationTimestamp="2025-12-15 05:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:42:34.074513558 +0000 UTC m=+317.771025475" watchObservedRunningTime="2025-12-15 05:42:34.079124302 +0000 UTC m=+317.775636218" Dec 15 05:42:34 crc kubenswrapper[4747]: I1215 05:42:34.622674 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf"] Dec 15 05:42:34 crc kubenswrapper[4747]: I1215 05:42:34.623078 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" podUID="ce82cd9c-0674-4a4e-8ecf-925dfc855d53" containerName="route-controller-manager" containerID="cri-o://6abcc36ae574e063fe80a16d91f652f97ea2d70ecde740d105a17c7ec8c5d535" gracePeriod=30 Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.037160 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.058868 4747 generic.go:334] "Generic (PLEG): container finished" podID="ce82cd9c-0674-4a4e-8ecf-925dfc855d53" containerID="6abcc36ae574e063fe80a16d91f652f97ea2d70ecde740d105a17c7ec8c5d535" exitCode=0 Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.058950 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.059021 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" event={"ID":"ce82cd9c-0674-4a4e-8ecf-925dfc855d53","Type":"ContainerDied","Data":"6abcc36ae574e063fe80a16d91f652f97ea2d70ecde740d105a17c7ec8c5d535"} Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.059244 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf" event={"ID":"ce82cd9c-0674-4a4e-8ecf-925dfc855d53","Type":"ContainerDied","Data":"1f008aeae21c8901de2b4ff2d2d8790d8be1f5ef3453b8ad780127244b7562a0"} Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.059281 4747 scope.go:117] "RemoveContainer" containerID="6abcc36ae574e063fe80a16d91f652f97ea2d70ecde740d105a17c7ec8c5d535" Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.077576 4747 scope.go:117] "RemoveContainer" containerID="6abcc36ae574e063fe80a16d91f652f97ea2d70ecde740d105a17c7ec8c5d535" Dec 15 05:42:35 crc kubenswrapper[4747]: E1215 05:42:35.078020 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6abcc36ae574e063fe80a16d91f652f97ea2d70ecde740d105a17c7ec8c5d535\": container with ID starting with 
6abcc36ae574e063fe80a16d91f652f97ea2d70ecde740d105a17c7ec8c5d535 not found: ID does not exist" containerID="6abcc36ae574e063fe80a16d91f652f97ea2d70ecde740d105a17c7ec8c5d535" Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.078054 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6abcc36ae574e063fe80a16d91f652f97ea2d70ecde740d105a17c7ec8c5d535"} err="failed to get container status \"6abcc36ae574e063fe80a16d91f652f97ea2d70ecde740d105a17c7ec8c5d535\": rpc error: code = NotFound desc = could not find container \"6abcc36ae574e063fe80a16d91f652f97ea2d70ecde740d105a17c7ec8c5d535\": container with ID starting with 6abcc36ae574e063fe80a16d91f652f97ea2d70ecde740d105a17c7ec8c5d535 not found: ID does not exist" Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.193635 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-client-ca\") pod \"ce82cd9c-0674-4a4e-8ecf-925dfc855d53\" (UID: \"ce82cd9c-0674-4a4e-8ecf-925dfc855d53\") " Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.193735 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-config\") pod \"ce82cd9c-0674-4a4e-8ecf-925dfc855d53\" (UID: \"ce82cd9c-0674-4a4e-8ecf-925dfc855d53\") " Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.193764 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nffpw\" (UniqueName: \"kubernetes.io/projected/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-kube-api-access-nffpw\") pod \"ce82cd9c-0674-4a4e-8ecf-925dfc855d53\" (UID: \"ce82cd9c-0674-4a4e-8ecf-925dfc855d53\") " Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.193882 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-serving-cert\") pod \"ce82cd9c-0674-4a4e-8ecf-925dfc855d53\" (UID: \"ce82cd9c-0674-4a4e-8ecf-925dfc855d53\") " Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.194636 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-client-ca" (OuterVolumeSpecName: "client-ca") pod "ce82cd9c-0674-4a4e-8ecf-925dfc855d53" (UID: "ce82cd9c-0674-4a4e-8ecf-925dfc855d53"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.194973 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-config" (OuterVolumeSpecName: "config") pod "ce82cd9c-0674-4a4e-8ecf-925dfc855d53" (UID: "ce82cd9c-0674-4a4e-8ecf-925dfc855d53"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.200403 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ce82cd9c-0674-4a4e-8ecf-925dfc855d53" (UID: "ce82cd9c-0674-4a4e-8ecf-925dfc855d53"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.200490 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-kube-api-access-nffpw" (OuterVolumeSpecName: "kube-api-access-nffpw") pod "ce82cd9c-0674-4a4e-8ecf-925dfc855d53" (UID: "ce82cd9c-0674-4a4e-8ecf-925dfc855d53"). InnerVolumeSpecName "kube-api-access-nffpw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.296019 4747 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-client-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.296056 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nffpw\" (UniqueName: \"kubernetes.io/projected/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-kube-api-access-nffpw\") on node \"crc\" DevicePath \"\"" Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.296071 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.296082 4747 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce82cd9c-0674-4a4e-8ecf-925dfc855d53-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.382274 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf"] Dec 15 05:42:35 crc kubenswrapper[4747]: I1215 05:42:35.384555 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf466c567-d9fnf"] Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.007666 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr"] Dec 15 05:42:36 crc kubenswrapper[4747]: E1215 05:42:36.008124 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce82cd9c-0674-4a4e-8ecf-925dfc855d53" containerName="route-controller-manager" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.008148 4747 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ce82cd9c-0674-4a4e-8ecf-925dfc855d53" containerName="route-controller-manager" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.008249 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce82cd9c-0674-4a4e-8ecf-925dfc855d53" containerName="route-controller-manager" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.008646 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.010262 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.010470 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.010511 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.010682 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.010908 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.011256 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.016302 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr"] Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.108036 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1242f9b0-272a-42ce-a1af-f3b5dbb54fe5-config\") pod \"route-controller-manager-695bdc4c5f-5vcvr\" (UID: \"1242f9b0-272a-42ce-a1af-f3b5dbb54fe5\") " pod="openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.108093 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1242f9b0-272a-42ce-a1af-f3b5dbb54fe5-client-ca\") pod \"route-controller-manager-695bdc4c5f-5vcvr\" (UID: \"1242f9b0-272a-42ce-a1af-f3b5dbb54fe5\") " pod="openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.108147 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1242f9b0-272a-42ce-a1af-f3b5dbb54fe5-serving-cert\") pod \"route-controller-manager-695bdc4c5f-5vcvr\" (UID: \"1242f9b0-272a-42ce-a1af-f3b5dbb54fe5\") " pod="openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.108183 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmsvd\" (UniqueName: \"kubernetes.io/projected/1242f9b0-272a-42ce-a1af-f3b5dbb54fe5-kube-api-access-xmsvd\") pod \"route-controller-manager-695bdc4c5f-5vcvr\" (UID: \"1242f9b0-272a-42ce-a1af-f3b5dbb54fe5\") " pod="openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.209729 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1242f9b0-272a-42ce-a1af-f3b5dbb54fe5-serving-cert\") 
pod \"route-controller-manager-695bdc4c5f-5vcvr\" (UID: \"1242f9b0-272a-42ce-a1af-f3b5dbb54fe5\") " pod="openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.209788 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmsvd\" (UniqueName: \"kubernetes.io/projected/1242f9b0-272a-42ce-a1af-f3b5dbb54fe5-kube-api-access-xmsvd\") pod \"route-controller-manager-695bdc4c5f-5vcvr\" (UID: \"1242f9b0-272a-42ce-a1af-f3b5dbb54fe5\") " pod="openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.209841 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1242f9b0-272a-42ce-a1af-f3b5dbb54fe5-config\") pod \"route-controller-manager-695bdc4c5f-5vcvr\" (UID: \"1242f9b0-272a-42ce-a1af-f3b5dbb54fe5\") " pod="openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.209868 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1242f9b0-272a-42ce-a1af-f3b5dbb54fe5-client-ca\") pod \"route-controller-manager-695bdc4c5f-5vcvr\" (UID: \"1242f9b0-272a-42ce-a1af-f3b5dbb54fe5\") " pod="openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.210850 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1242f9b0-272a-42ce-a1af-f3b5dbb54fe5-client-ca\") pod \"route-controller-manager-695bdc4c5f-5vcvr\" (UID: \"1242f9b0-272a-42ce-a1af-f3b5dbb54fe5\") " pod="openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.211312 4747 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1242f9b0-272a-42ce-a1af-f3b5dbb54fe5-config\") pod \"route-controller-manager-695bdc4c5f-5vcvr\" (UID: \"1242f9b0-272a-42ce-a1af-f3b5dbb54fe5\") " pod="openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.214717 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1242f9b0-272a-42ce-a1af-f3b5dbb54fe5-serving-cert\") pod \"route-controller-manager-695bdc4c5f-5vcvr\" (UID: \"1242f9b0-272a-42ce-a1af-f3b5dbb54fe5\") " pod="openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.226599 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmsvd\" (UniqueName: \"kubernetes.io/projected/1242f9b0-272a-42ce-a1af-f3b5dbb54fe5-kube-api-access-xmsvd\") pod \"route-controller-manager-695bdc4c5f-5vcvr\" (UID: \"1242f9b0-272a-42ce-a1af-f3b5dbb54fe5\") " pod="openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.321243 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.635523 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce82cd9c-0674-4a4e-8ecf-925dfc855d53" path="/var/lib/kubelet/pods/ce82cd9c-0674-4a4e-8ecf-925dfc855d53/volumes" Dec 15 05:42:36 crc kubenswrapper[4747]: I1215 05:42:36.689868 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr"] Dec 15 05:42:36 crc kubenswrapper[4747]: W1215 05:42:36.699403 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1242f9b0_272a_42ce_a1af_f3b5dbb54fe5.slice/crio-a461756346f6ba6ca7d33e67901ef45aa2485ef0c46c16224af4babd2ca4676d WatchSource:0}: Error finding container a461756346f6ba6ca7d33e67901ef45aa2485ef0c46c16224af4babd2ca4676d: Status 404 returned error can't find the container with id a461756346f6ba6ca7d33e67901ef45aa2485ef0c46c16224af4babd2ca4676d Dec 15 05:42:37 crc kubenswrapper[4747]: I1215 05:42:37.072408 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr" event={"ID":"1242f9b0-272a-42ce-a1af-f3b5dbb54fe5","Type":"ContainerStarted","Data":"5ab35dbc70e75b6bb3f6e22f8acfb0d70a44d4a9a61ebba1041a13d5f5c8f689"} Dec 15 05:42:37 crc kubenswrapper[4747]: I1215 05:42:37.072796 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr" event={"ID":"1242f9b0-272a-42ce-a1af-f3b5dbb54fe5","Type":"ContainerStarted","Data":"a461756346f6ba6ca7d33e67901ef45aa2485ef0c46c16224af4babd2ca4676d"} Dec 15 05:42:37 crc kubenswrapper[4747]: I1215 05:42:37.073275 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr" Dec 15 05:42:37 crc kubenswrapper[4747]: I1215 05:42:37.077228 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr" Dec 15 05:42:37 crc kubenswrapper[4747]: I1215 05:42:37.095049 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-695bdc4c5f-5vcvr" podStartSLOduration=3.095025705 podStartE2EDuration="3.095025705s" podCreationTimestamp="2025-12-15 05:42:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:42:37.092845879 +0000 UTC m=+320.789357796" watchObservedRunningTime="2025-12-15 05:42:37.095025705 +0000 UTC m=+320.791537621" Dec 15 05:42:52 crc kubenswrapper[4747]: I1215 05:42:52.869196 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-2h42r" Dec 15 05:42:52 crc kubenswrapper[4747]: I1215 05:42:52.916366 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lzg4l"] Dec 15 05:42:54 crc kubenswrapper[4747]: I1215 05:42:54.635989 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7fdc9849d6-4gg57"] Dec 15 05:42:54 crc kubenswrapper[4747]: I1215 05:42:54.636572 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" podUID="9408704c-13de-4028-aec6-d8db878ae765" containerName="controller-manager" containerID="cri-o://972877489f18bbb9c8230ea8b33806f8b3e7c84bccd3955788f2ba578dd59455" gracePeriod=30 Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.019653 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.143905 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvh7n\" (UniqueName: \"kubernetes.io/projected/9408704c-13de-4028-aec6-d8db878ae765-kube-api-access-gvh7n\") pod \"9408704c-13de-4028-aec6-d8db878ae765\" (UID: \"9408704c-13de-4028-aec6-d8db878ae765\") " Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.144019 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9408704c-13de-4028-aec6-d8db878ae765-client-ca\") pod \"9408704c-13de-4028-aec6-d8db878ae765\" (UID: \"9408704c-13de-4028-aec6-d8db878ae765\") " Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.144057 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9408704c-13de-4028-aec6-d8db878ae765-proxy-ca-bundles\") pod \"9408704c-13de-4028-aec6-d8db878ae765\" (UID: \"9408704c-13de-4028-aec6-d8db878ae765\") " Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.144155 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9408704c-13de-4028-aec6-d8db878ae765-serving-cert\") pod \"9408704c-13de-4028-aec6-d8db878ae765\" (UID: \"9408704c-13de-4028-aec6-d8db878ae765\") " Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.144178 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9408704c-13de-4028-aec6-d8db878ae765-config\") pod \"9408704c-13de-4028-aec6-d8db878ae765\" (UID: \"9408704c-13de-4028-aec6-d8db878ae765\") " Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.145160 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9408704c-13de-4028-aec6-d8db878ae765-client-ca" (OuterVolumeSpecName: "client-ca") pod "9408704c-13de-4028-aec6-d8db878ae765" (UID: "9408704c-13de-4028-aec6-d8db878ae765"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.145218 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9408704c-13de-4028-aec6-d8db878ae765-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9408704c-13de-4028-aec6-d8db878ae765" (UID: "9408704c-13de-4028-aec6-d8db878ae765"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.145263 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9408704c-13de-4028-aec6-d8db878ae765-config" (OuterVolumeSpecName: "config") pod "9408704c-13de-4028-aec6-d8db878ae765" (UID: "9408704c-13de-4028-aec6-d8db878ae765"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.151070 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9408704c-13de-4028-aec6-d8db878ae765-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9408704c-13de-4028-aec6-d8db878ae765" (UID: "9408704c-13de-4028-aec6-d8db878ae765"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.151222 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9408704c-13de-4028-aec6-d8db878ae765-kube-api-access-gvh7n" (OuterVolumeSpecName: "kube-api-access-gvh7n") pod "9408704c-13de-4028-aec6-d8db878ae765" (UID: "9408704c-13de-4028-aec6-d8db878ae765"). InnerVolumeSpecName "kube-api-access-gvh7n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.173382 4747 generic.go:334] "Generic (PLEG): container finished" podID="9408704c-13de-4028-aec6-d8db878ae765" containerID="972877489f18bbb9c8230ea8b33806f8b3e7c84bccd3955788f2ba578dd59455" exitCode=0 Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.173428 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.173431 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" event={"ID":"9408704c-13de-4028-aec6-d8db878ae765","Type":"ContainerDied","Data":"972877489f18bbb9c8230ea8b33806f8b3e7c84bccd3955788f2ba578dd59455"} Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.173483 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fdc9849d6-4gg57" event={"ID":"9408704c-13de-4028-aec6-d8db878ae765","Type":"ContainerDied","Data":"784747b25d353ef3b8e3ee46f5a2f8df8b8d7ac288c77cd724f9a1784a66f19d"} Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.173502 4747 scope.go:117] "RemoveContainer" containerID="972877489f18bbb9c8230ea8b33806f8b3e7c84bccd3955788f2ba578dd59455" Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.189357 4747 scope.go:117] "RemoveContainer" containerID="972877489f18bbb9c8230ea8b33806f8b3e7c84bccd3955788f2ba578dd59455" Dec 15 05:42:55 crc kubenswrapper[4747]: E1215 05:42:55.189879 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"972877489f18bbb9c8230ea8b33806f8b3e7c84bccd3955788f2ba578dd59455\": container with ID starting with 972877489f18bbb9c8230ea8b33806f8b3e7c84bccd3955788f2ba578dd59455 not found: ID does not exist" 
containerID="972877489f18bbb9c8230ea8b33806f8b3e7c84bccd3955788f2ba578dd59455" Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.189942 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"972877489f18bbb9c8230ea8b33806f8b3e7c84bccd3955788f2ba578dd59455"} err="failed to get container status \"972877489f18bbb9c8230ea8b33806f8b3e7c84bccd3955788f2ba578dd59455\": rpc error: code = NotFound desc = could not find container \"972877489f18bbb9c8230ea8b33806f8b3e7c84bccd3955788f2ba578dd59455\": container with ID starting with 972877489f18bbb9c8230ea8b33806f8b3e7c84bccd3955788f2ba578dd59455 not found: ID does not exist" Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.206549 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7fdc9849d6-4gg57"] Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.208703 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7fdc9849d6-4gg57"] Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.246459 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvh7n\" (UniqueName: \"kubernetes.io/projected/9408704c-13de-4028-aec6-d8db878ae765-kube-api-access-gvh7n\") on node \"crc\" DevicePath \"\"" Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.246497 4747 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9408704c-13de-4028-aec6-d8db878ae765-client-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.246510 4747 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9408704c-13de-4028-aec6-d8db878ae765-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.246521 4747 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/9408704c-13de-4028-aec6-d8db878ae765-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:42:55 crc kubenswrapper[4747]: I1215 05:42:55.246529 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9408704c-13de-4028-aec6-d8db878ae765-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.020775 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5665486c86-shh7m"] Dec 15 05:42:56 crc kubenswrapper[4747]: E1215 05:42:56.021003 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9408704c-13de-4028-aec6-d8db878ae765" containerName="controller-manager" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.021018 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="9408704c-13de-4028-aec6-d8db878ae765" containerName="controller-manager" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.021103 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="9408704c-13de-4028-aec6-d8db878ae765" containerName="controller-manager" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.021507 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.023649 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.023694 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.023732 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.023658 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.023959 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.024327 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.029254 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.030742 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5665486c86-shh7m"] Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.157688 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d68f87e8-662d-4805-9e40-a515820dc05d-proxy-ca-bundles\") pod \"controller-manager-5665486c86-shh7m\" (UID: \"d68f87e8-662d-4805-9e40-a515820dc05d\") " 
pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.157730 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d68f87e8-662d-4805-9e40-a515820dc05d-serving-cert\") pod \"controller-manager-5665486c86-shh7m\" (UID: \"d68f87e8-662d-4805-9e40-a515820dc05d\") " pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.157762 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm4gm\" (UniqueName: \"kubernetes.io/projected/d68f87e8-662d-4805-9e40-a515820dc05d-kube-api-access-wm4gm\") pod \"controller-manager-5665486c86-shh7m\" (UID: \"d68f87e8-662d-4805-9e40-a515820dc05d\") " pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.157795 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d68f87e8-662d-4805-9e40-a515820dc05d-config\") pod \"controller-manager-5665486c86-shh7m\" (UID: \"d68f87e8-662d-4805-9e40-a515820dc05d\") " pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.158217 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d68f87e8-662d-4805-9e40-a515820dc05d-client-ca\") pod \"controller-manager-5665486c86-shh7m\" (UID: \"d68f87e8-662d-4805-9e40-a515820dc05d\") " pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.259098 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d68f87e8-662d-4805-9e40-a515820dc05d-proxy-ca-bundles\") pod \"controller-manager-5665486c86-shh7m\" (UID: \"d68f87e8-662d-4805-9e40-a515820dc05d\") " pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.259140 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d68f87e8-662d-4805-9e40-a515820dc05d-serving-cert\") pod \"controller-manager-5665486c86-shh7m\" (UID: \"d68f87e8-662d-4805-9e40-a515820dc05d\") " pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.259167 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm4gm\" (UniqueName: \"kubernetes.io/projected/d68f87e8-662d-4805-9e40-a515820dc05d-kube-api-access-wm4gm\") pod \"controller-manager-5665486c86-shh7m\" (UID: \"d68f87e8-662d-4805-9e40-a515820dc05d\") " pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.259196 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d68f87e8-662d-4805-9e40-a515820dc05d-config\") pod \"controller-manager-5665486c86-shh7m\" (UID: \"d68f87e8-662d-4805-9e40-a515820dc05d\") " pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.259245 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d68f87e8-662d-4805-9e40-a515820dc05d-client-ca\") pod \"controller-manager-5665486c86-shh7m\" (UID: \"d68f87e8-662d-4805-9e40-a515820dc05d\") " pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.260063 4747 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d68f87e8-662d-4805-9e40-a515820dc05d-client-ca\") pod \"controller-manager-5665486c86-shh7m\" (UID: \"d68f87e8-662d-4805-9e40-a515820dc05d\") " pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.260207 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d68f87e8-662d-4805-9e40-a515820dc05d-proxy-ca-bundles\") pod \"controller-manager-5665486c86-shh7m\" (UID: \"d68f87e8-662d-4805-9e40-a515820dc05d\") " pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.260449 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d68f87e8-662d-4805-9e40-a515820dc05d-config\") pod \"controller-manager-5665486c86-shh7m\" (UID: \"d68f87e8-662d-4805-9e40-a515820dc05d\") " pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.263562 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d68f87e8-662d-4805-9e40-a515820dc05d-serving-cert\") pod \"controller-manager-5665486c86-shh7m\" (UID: \"d68f87e8-662d-4805-9e40-a515820dc05d\") " pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.273287 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm4gm\" (UniqueName: \"kubernetes.io/projected/d68f87e8-662d-4805-9e40-a515820dc05d-kube-api-access-wm4gm\") pod \"controller-manager-5665486c86-shh7m\" (UID: \"d68f87e8-662d-4805-9e40-a515820dc05d\") " pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" Dec 15 
05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.333350 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.639374 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9408704c-13de-4028-aec6-d8db878ae765" path="/var/lib/kubelet/pods/9408704c-13de-4028-aec6-d8db878ae765/volumes" Dec 15 05:42:56 crc kubenswrapper[4747]: I1215 05:42:56.704193 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5665486c86-shh7m"] Dec 15 05:42:57 crc kubenswrapper[4747]: I1215 05:42:57.193385 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" event={"ID":"d68f87e8-662d-4805-9e40-a515820dc05d","Type":"ContainerStarted","Data":"7c282ccb6fbffcb0e47faae80ec4b3f3779f6b23fa35d1ba8871c7377138da19"} Dec 15 05:42:57 crc kubenswrapper[4747]: I1215 05:42:57.193437 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" event={"ID":"d68f87e8-662d-4805-9e40-a515820dc05d","Type":"ContainerStarted","Data":"25932f976280a497fddf4e40ea6b94e4bd62b08d7614bd285d55e8f5f484b0ab"} Dec 15 05:42:57 crc kubenswrapper[4747]: I1215 05:42:57.193689 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" Dec 15 05:42:57 crc kubenswrapper[4747]: I1215 05:42:57.197697 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" Dec 15 05:42:57 crc kubenswrapper[4747]: I1215 05:42:57.212125 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5665486c86-shh7m" podStartSLOduration=3.212108714 
podStartE2EDuration="3.212108714s" podCreationTimestamp="2025-12-15 05:42:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:42:57.210265303 +0000 UTC m=+340.906777220" watchObservedRunningTime="2025-12-15 05:42:57.212108714 +0000 UTC m=+340.908620631" Dec 15 05:43:17 crc kubenswrapper[4747]: I1215 05:43:17.947723 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" podUID="db7a7a97-4354-4b54-afbc-e47fb8751316" containerName="registry" containerID="cri-o://ef9f48e4c4a24b96cc01c36f2c265127ef3bdfe7596733449be856abe5564602" gracePeriod=30 Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.314409 4747 generic.go:334] "Generic (PLEG): container finished" podID="db7a7a97-4354-4b54-afbc-e47fb8751316" containerID="ef9f48e4c4a24b96cc01c36f2c265127ef3bdfe7596733449be856abe5564602" exitCode=0 Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.314497 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" event={"ID":"db7a7a97-4354-4b54-afbc-e47fb8751316","Type":"ContainerDied","Data":"ef9f48e4c4a24b96cc01c36f2c265127ef3bdfe7596733449be856abe5564602"} Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.345297 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.438387 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db7a7a97-4354-4b54-afbc-e47fb8751316-bound-sa-token\") pod \"db7a7a97-4354-4b54-afbc-e47fb8751316\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.440832 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"db7a7a97-4354-4b54-afbc-e47fb8751316\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.441027 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/db7a7a97-4354-4b54-afbc-e47fb8751316-registry-certificates\") pod \"db7a7a97-4354-4b54-afbc-e47fb8751316\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.441064 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db7a7a97-4354-4b54-afbc-e47fb8751316-trusted-ca\") pod \"db7a7a97-4354-4b54-afbc-e47fb8751316\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.441144 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/db7a7a97-4354-4b54-afbc-e47fb8751316-ca-trust-extracted\") pod \"db7a7a97-4354-4b54-afbc-e47fb8751316\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.441394 4747 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/db7a7a97-4354-4b54-afbc-e47fb8751316-registry-tls\") pod \"db7a7a97-4354-4b54-afbc-e47fb8751316\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.441439 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/db7a7a97-4354-4b54-afbc-e47fb8751316-installation-pull-secrets\") pod \"db7a7a97-4354-4b54-afbc-e47fb8751316\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.441468 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d65z8\" (UniqueName: \"kubernetes.io/projected/db7a7a97-4354-4b54-afbc-e47fb8751316-kube-api-access-d65z8\") pod \"db7a7a97-4354-4b54-afbc-e47fb8751316\" (UID: \"db7a7a97-4354-4b54-afbc-e47fb8751316\") " Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.442067 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db7a7a97-4354-4b54-afbc-e47fb8751316-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "db7a7a97-4354-4b54-afbc-e47fb8751316" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.442156 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db7a7a97-4354-4b54-afbc-e47fb8751316-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "db7a7a97-4354-4b54-afbc-e47fb8751316" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.449383 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "db7a7a97-4354-4b54-afbc-e47fb8751316" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.449409 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7a7a97-4354-4b54-afbc-e47fb8751316-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "db7a7a97-4354-4b54-afbc-e47fb8751316" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.449579 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db7a7a97-4354-4b54-afbc-e47fb8751316-kube-api-access-d65z8" (OuterVolumeSpecName: "kube-api-access-d65z8") pod "db7a7a97-4354-4b54-afbc-e47fb8751316" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316"). InnerVolumeSpecName "kube-api-access-d65z8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.449664 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db7a7a97-4354-4b54-afbc-e47fb8751316-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "db7a7a97-4354-4b54-afbc-e47fb8751316" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.449835 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db7a7a97-4354-4b54-afbc-e47fb8751316-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "db7a7a97-4354-4b54-afbc-e47fb8751316" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.455237 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db7a7a97-4354-4b54-afbc-e47fb8751316-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "db7a7a97-4354-4b54-afbc-e47fb8751316" (UID: "db7a7a97-4354-4b54-afbc-e47fb8751316"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.543580 4747 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/db7a7a97-4354-4b54-afbc-e47fb8751316-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.543611 4747 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db7a7a97-4354-4b54-afbc-e47fb8751316-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.543625 4747 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/db7a7a97-4354-4b54-afbc-e47fb8751316-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.543634 4747 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/db7a7a97-4354-4b54-afbc-e47fb8751316-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.543642 4747 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/db7a7a97-4354-4b54-afbc-e47fb8751316-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.543652 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d65z8\" (UniqueName: \"kubernetes.io/projected/db7a7a97-4354-4b54-afbc-e47fb8751316-kube-api-access-d65z8\") on node \"crc\" DevicePath \"\"" Dec 15 05:43:18 crc kubenswrapper[4747]: I1215 05:43:18.543660 4747 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db7a7a97-4354-4b54-afbc-e47fb8751316-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 15 05:43:19 crc kubenswrapper[4747]: I1215 05:43:19.321627 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" event={"ID":"db7a7a97-4354-4b54-afbc-e47fb8751316","Type":"ContainerDied","Data":"d73ac43fa234248b2fcbc1bce861d8ca58ace7d5437852eec4e192990d8f7b90"} Dec 15 05:43:19 crc kubenswrapper[4747]: I1215 05:43:19.321698 4747 scope.go:117] "RemoveContainer" containerID="ef9f48e4c4a24b96cc01c36f2c265127ef3bdfe7596733449be856abe5564602" Dec 15 05:43:19 crc kubenswrapper[4747]: I1215 05:43:19.321711 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lzg4l" Dec 15 05:43:19 crc kubenswrapper[4747]: I1215 05:43:19.339831 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lzg4l"] Dec 15 05:43:19 crc kubenswrapper[4747]: I1215 05:43:19.345002 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lzg4l"] Dec 15 05:43:20 crc kubenswrapper[4747]: I1215 05:43:20.636046 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db7a7a97-4354-4b54-afbc-e47fb8751316" path="/var/lib/kubelet/pods/db7a7a97-4354-4b54-afbc-e47fb8751316/volumes" Dec 15 05:43:28 crc kubenswrapper[4747]: I1215 05:43:28.864995 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 05:43:28 crc kubenswrapper[4747]: I1215 05:43:28.865075 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 05:43:58 crc kubenswrapper[4747]: I1215 05:43:58.865763 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 05:43:58 crc kubenswrapper[4747]: I1215 05:43:58.867086 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" 
podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 05:44:28 crc kubenswrapper[4747]: I1215 05:44:28.866011 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 05:44:28 crc kubenswrapper[4747]: I1215 05:44:28.866627 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 05:44:28 crc kubenswrapper[4747]: I1215 05:44:28.866682 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 05:44:28 crc kubenswrapper[4747]: I1215 05:44:28.867382 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"69403043616ef8b443997fe2ec8a367f1ef1de28024e4cb945e644c4878527e7"} pod="openshift-machine-config-operator/machine-config-daemon-nldtn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 15 05:44:28 crc kubenswrapper[4747]: I1215 05:44:28.867447 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" containerID="cri-o://69403043616ef8b443997fe2ec8a367f1ef1de28024e4cb945e644c4878527e7" gracePeriod=600 Dec 15 
05:44:29 crc kubenswrapper[4747]: I1215 05:44:29.707447 4747 generic.go:334] "Generic (PLEG): container finished" podID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerID="69403043616ef8b443997fe2ec8a367f1ef1de28024e4cb945e644c4878527e7" exitCode=0 Dec 15 05:44:29 crc kubenswrapper[4747]: I1215 05:44:29.707539 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerDied","Data":"69403043616ef8b443997fe2ec8a367f1ef1de28024e4cb945e644c4878527e7"} Dec 15 05:44:29 crc kubenswrapper[4747]: I1215 05:44:29.707903 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerStarted","Data":"a17943ccf4995eb4ff240ba732355ee9e9020e929a2275df58776bf83d66a3b3"} Dec 15 05:44:29 crc kubenswrapper[4747]: I1215 05:44:29.707959 4747 scope.go:117] "RemoveContainer" containerID="d8e211d6177b24044383a0cc22cceb80f6442489a5d2b4adbafc3d36637b3a96" Dec 15 05:45:00 crc kubenswrapper[4747]: I1215 05:45:00.154967 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29429625-64ffh"] Dec 15 05:45:00 crc kubenswrapper[4747]: E1215 05:45:00.155873 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7a7a97-4354-4b54-afbc-e47fb8751316" containerName="registry" Dec 15 05:45:00 crc kubenswrapper[4747]: I1215 05:45:00.155890 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7a7a97-4354-4b54-afbc-e47fb8751316" containerName="registry" Dec 15 05:45:00 crc kubenswrapper[4747]: I1215 05:45:00.156086 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="db7a7a97-4354-4b54-afbc-e47fb8751316" containerName="registry" Dec 15 05:45:00 crc kubenswrapper[4747]: I1215 05:45:00.156757 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29429625-64ffh" Dec 15 05:45:00 crc kubenswrapper[4747]: I1215 05:45:00.158561 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 15 05:45:00 crc kubenswrapper[4747]: I1215 05:45:00.160154 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 15 05:45:00 crc kubenswrapper[4747]: I1215 05:45:00.167665 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29429625-64ffh"] Dec 15 05:45:00 crc kubenswrapper[4747]: I1215 05:45:00.343554 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgk64\" (UniqueName: \"kubernetes.io/projected/3dab2e78-e203-4f2e-9b13-a42f800038f2-kube-api-access-pgk64\") pod \"collect-profiles-29429625-64ffh\" (UID: \"3dab2e78-e203-4f2e-9b13-a42f800038f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429625-64ffh" Dec 15 05:45:00 crc kubenswrapper[4747]: I1215 05:45:00.343627 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3dab2e78-e203-4f2e-9b13-a42f800038f2-secret-volume\") pod \"collect-profiles-29429625-64ffh\" (UID: \"3dab2e78-e203-4f2e-9b13-a42f800038f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429625-64ffh" Dec 15 05:45:00 crc kubenswrapper[4747]: I1215 05:45:00.343647 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3dab2e78-e203-4f2e-9b13-a42f800038f2-config-volume\") pod \"collect-profiles-29429625-64ffh\" (UID: \"3dab2e78-e203-4f2e-9b13-a42f800038f2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29429625-64ffh" Dec 15 05:45:00 crc kubenswrapper[4747]: I1215 05:45:00.444523 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgk64\" (UniqueName: \"kubernetes.io/projected/3dab2e78-e203-4f2e-9b13-a42f800038f2-kube-api-access-pgk64\") pod \"collect-profiles-29429625-64ffh\" (UID: \"3dab2e78-e203-4f2e-9b13-a42f800038f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429625-64ffh" Dec 15 05:45:00 crc kubenswrapper[4747]: I1215 05:45:00.444611 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3dab2e78-e203-4f2e-9b13-a42f800038f2-secret-volume\") pod \"collect-profiles-29429625-64ffh\" (UID: \"3dab2e78-e203-4f2e-9b13-a42f800038f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429625-64ffh" Dec 15 05:45:00 crc kubenswrapper[4747]: I1215 05:45:00.444637 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3dab2e78-e203-4f2e-9b13-a42f800038f2-config-volume\") pod \"collect-profiles-29429625-64ffh\" (UID: \"3dab2e78-e203-4f2e-9b13-a42f800038f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429625-64ffh" Dec 15 05:45:00 crc kubenswrapper[4747]: I1215 05:45:00.445714 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3dab2e78-e203-4f2e-9b13-a42f800038f2-config-volume\") pod \"collect-profiles-29429625-64ffh\" (UID: \"3dab2e78-e203-4f2e-9b13-a42f800038f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429625-64ffh" Dec 15 05:45:00 crc kubenswrapper[4747]: I1215 05:45:00.452166 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/3dab2e78-e203-4f2e-9b13-a42f800038f2-secret-volume\") pod \"collect-profiles-29429625-64ffh\" (UID: \"3dab2e78-e203-4f2e-9b13-a42f800038f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429625-64ffh" Dec 15 05:45:00 crc kubenswrapper[4747]: I1215 05:45:00.459717 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgk64\" (UniqueName: \"kubernetes.io/projected/3dab2e78-e203-4f2e-9b13-a42f800038f2-kube-api-access-pgk64\") pod \"collect-profiles-29429625-64ffh\" (UID: \"3dab2e78-e203-4f2e-9b13-a42f800038f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429625-64ffh" Dec 15 05:45:00 crc kubenswrapper[4747]: I1215 05:45:00.473475 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29429625-64ffh" Dec 15 05:45:00 crc kubenswrapper[4747]: I1215 05:45:00.829802 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29429625-64ffh"] Dec 15 05:45:00 crc kubenswrapper[4747]: I1215 05:45:00.878090 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29429625-64ffh" event={"ID":"3dab2e78-e203-4f2e-9b13-a42f800038f2","Type":"ContainerStarted","Data":"011a252c8ea6ec920503b09c175f6966744df1def79f33a6900c968dbb7833f4"} Dec 15 05:45:01 crc kubenswrapper[4747]: I1215 05:45:01.884893 4747 generic.go:334] "Generic (PLEG): container finished" podID="3dab2e78-e203-4f2e-9b13-a42f800038f2" containerID="249493c95bd35a370396abe1211fa33a94d8774a9c8b72abf99d7c8f9cd4aa14" exitCode=0 Dec 15 05:45:01 crc kubenswrapper[4747]: I1215 05:45:01.885031 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29429625-64ffh" 
event={"ID":"3dab2e78-e203-4f2e-9b13-a42f800038f2","Type":"ContainerDied","Data":"249493c95bd35a370396abe1211fa33a94d8774a9c8b72abf99d7c8f9cd4aa14"} Dec 15 05:45:03 crc kubenswrapper[4747]: I1215 05:45:03.083242 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29429625-64ffh" Dec 15 05:45:03 crc kubenswrapper[4747]: I1215 05:45:03.278535 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgk64\" (UniqueName: \"kubernetes.io/projected/3dab2e78-e203-4f2e-9b13-a42f800038f2-kube-api-access-pgk64\") pod \"3dab2e78-e203-4f2e-9b13-a42f800038f2\" (UID: \"3dab2e78-e203-4f2e-9b13-a42f800038f2\") " Dec 15 05:45:03 crc kubenswrapper[4747]: I1215 05:45:03.278652 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3dab2e78-e203-4f2e-9b13-a42f800038f2-secret-volume\") pod \"3dab2e78-e203-4f2e-9b13-a42f800038f2\" (UID: \"3dab2e78-e203-4f2e-9b13-a42f800038f2\") " Dec 15 05:45:03 crc kubenswrapper[4747]: I1215 05:45:03.278741 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3dab2e78-e203-4f2e-9b13-a42f800038f2-config-volume\") pod \"3dab2e78-e203-4f2e-9b13-a42f800038f2\" (UID: \"3dab2e78-e203-4f2e-9b13-a42f800038f2\") " Dec 15 05:45:03 crc kubenswrapper[4747]: I1215 05:45:03.279464 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dab2e78-e203-4f2e-9b13-a42f800038f2-config-volume" (OuterVolumeSpecName: "config-volume") pod "3dab2e78-e203-4f2e-9b13-a42f800038f2" (UID: "3dab2e78-e203-4f2e-9b13-a42f800038f2"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:45:03 crc kubenswrapper[4747]: I1215 05:45:03.285264 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dab2e78-e203-4f2e-9b13-a42f800038f2-kube-api-access-pgk64" (OuterVolumeSpecName: "kube-api-access-pgk64") pod "3dab2e78-e203-4f2e-9b13-a42f800038f2" (UID: "3dab2e78-e203-4f2e-9b13-a42f800038f2"). InnerVolumeSpecName "kube-api-access-pgk64". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:45:03 crc kubenswrapper[4747]: I1215 05:45:03.285397 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dab2e78-e203-4f2e-9b13-a42f800038f2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3dab2e78-e203-4f2e-9b13-a42f800038f2" (UID: "3dab2e78-e203-4f2e-9b13-a42f800038f2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:45:03 crc kubenswrapper[4747]: I1215 05:45:03.380176 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgk64\" (UniqueName: \"kubernetes.io/projected/3dab2e78-e203-4f2e-9b13-a42f800038f2-kube-api-access-pgk64\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:03 crc kubenswrapper[4747]: I1215 05:45:03.380209 4747 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3dab2e78-e203-4f2e-9b13-a42f800038f2-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:03 crc kubenswrapper[4747]: I1215 05:45:03.380219 4747 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3dab2e78-e203-4f2e-9b13-a42f800038f2-config-volume\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:03 crc kubenswrapper[4747]: I1215 05:45:03.898832 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29429625-64ffh" 
event={"ID":"3dab2e78-e203-4f2e-9b13-a42f800038f2","Type":"ContainerDied","Data":"011a252c8ea6ec920503b09c175f6966744df1def79f33a6900c968dbb7833f4"} Dec 15 05:45:03 crc kubenswrapper[4747]: I1215 05:45:03.898875 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="011a252c8ea6ec920503b09c175f6966744df1def79f33a6900c968dbb7833f4" Dec 15 05:45:03 crc kubenswrapper[4747]: I1215 05:45:03.898964 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29429625-64ffh" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.525152 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-t5flq"] Dec 15 05:45:22 crc kubenswrapper[4747]: E1215 05:45:22.526697 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dab2e78-e203-4f2e-9b13-a42f800038f2" containerName="collect-profiles" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.526772 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dab2e78-e203-4f2e-9b13-a42f800038f2" containerName="collect-profiles" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.526952 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dab2e78-e203-4f2e-9b13-a42f800038f2" containerName="collect-profiles" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.527451 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-t5flq" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.531975 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-bwhbl"] Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.532774 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-bwhbl" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.535310 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-t5flq"] Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.538678 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.539107 4747 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-8ffz4" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.543497 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-qmrz8"] Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.544063 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.544255 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-qmrz8" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.544353 4747 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-6kggv" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.546358 4747 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-nsqjk" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.549064 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-bwhbl"] Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.553245 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-qmrz8"] Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.607997 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqdzd\" (UniqueName: \"kubernetes.io/projected/56932a48-4e8d-4052-b33e-daff9aeec190-kube-api-access-hqdzd\") pod \"cert-manager-webhook-5655c58dd6-qmrz8\" (UID: \"56932a48-4e8d-4052-b33e-daff9aeec190\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-qmrz8" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.608395 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2kkl\" (UniqueName: \"kubernetes.io/projected/7e3304fa-a54c-4472-935a-aad6d8673d12-kube-api-access-d2kkl\") pod \"cert-manager-cainjector-7f985d654d-t5flq\" (UID: \"7e3304fa-a54c-4472-935a-aad6d8673d12\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-t5flq" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.608508 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prnmq\" (UniqueName: \"kubernetes.io/projected/2dc6869c-9693-4dc8-81eb-4ff08e334aaf-kube-api-access-prnmq\") pod 
\"cert-manager-5b446d88c5-bwhbl\" (UID: \"2dc6869c-9693-4dc8-81eb-4ff08e334aaf\") " pod="cert-manager/cert-manager-5b446d88c5-bwhbl" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.709312 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2kkl\" (UniqueName: \"kubernetes.io/projected/7e3304fa-a54c-4472-935a-aad6d8673d12-kube-api-access-d2kkl\") pod \"cert-manager-cainjector-7f985d654d-t5flq\" (UID: \"7e3304fa-a54c-4472-935a-aad6d8673d12\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-t5flq" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.709354 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prnmq\" (UniqueName: \"kubernetes.io/projected/2dc6869c-9693-4dc8-81eb-4ff08e334aaf-kube-api-access-prnmq\") pod \"cert-manager-5b446d88c5-bwhbl\" (UID: \"2dc6869c-9693-4dc8-81eb-4ff08e334aaf\") " pod="cert-manager/cert-manager-5b446d88c5-bwhbl" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.709492 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqdzd\" (UniqueName: \"kubernetes.io/projected/56932a48-4e8d-4052-b33e-daff9aeec190-kube-api-access-hqdzd\") pod \"cert-manager-webhook-5655c58dd6-qmrz8\" (UID: \"56932a48-4e8d-4052-b33e-daff9aeec190\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-qmrz8" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.727380 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2kkl\" (UniqueName: \"kubernetes.io/projected/7e3304fa-a54c-4472-935a-aad6d8673d12-kube-api-access-d2kkl\") pod \"cert-manager-cainjector-7f985d654d-t5flq\" (UID: \"7e3304fa-a54c-4472-935a-aad6d8673d12\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-t5flq" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.727433 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prnmq\" (UniqueName: 
\"kubernetes.io/projected/2dc6869c-9693-4dc8-81eb-4ff08e334aaf-kube-api-access-prnmq\") pod \"cert-manager-5b446d88c5-bwhbl\" (UID: \"2dc6869c-9693-4dc8-81eb-4ff08e334aaf\") " pod="cert-manager/cert-manager-5b446d88c5-bwhbl" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.728760 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqdzd\" (UniqueName: \"kubernetes.io/projected/56932a48-4e8d-4052-b33e-daff9aeec190-kube-api-access-hqdzd\") pod \"cert-manager-webhook-5655c58dd6-qmrz8\" (UID: \"56932a48-4e8d-4052-b33e-daff9aeec190\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-qmrz8" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.842344 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-t5flq" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.847634 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-bwhbl" Dec 15 05:45:22 crc kubenswrapper[4747]: I1215 05:45:22.856526 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-qmrz8" Dec 15 05:45:23 crc kubenswrapper[4747]: I1215 05:45:23.059455 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-qmrz8"] Dec 15 05:45:23 crc kubenswrapper[4747]: W1215 05:45:23.065887 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56932a48_4e8d_4052_b33e_daff9aeec190.slice/crio-db049b41928d1f06d9bb82ac0855f9531c747b5467467370044e21ba76f7bec1 WatchSource:0}: Error finding container db049b41928d1f06d9bb82ac0855f9531c747b5467467370044e21ba76f7bec1: Status 404 returned error can't find the container with id db049b41928d1f06d9bb82ac0855f9531c747b5467467370044e21ba76f7bec1 Dec 15 05:45:23 crc kubenswrapper[4747]: I1215 05:45:23.068912 4747 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 15 05:45:23 crc kubenswrapper[4747]: I1215 05:45:23.077805 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-bwhbl"] Dec 15 05:45:23 crc kubenswrapper[4747]: W1215 05:45:23.079417 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dc6869c_9693_4dc8_81eb_4ff08e334aaf.slice/crio-edef5d4352b37353017eb0cf43322b968d515418aecd16d1ff458d6f2f851666 WatchSource:0}: Error finding container edef5d4352b37353017eb0cf43322b968d515418aecd16d1ff458d6f2f851666: Status 404 returned error can't find the container with id edef5d4352b37353017eb0cf43322b968d515418aecd16d1ff458d6f2f851666 Dec 15 05:45:23 crc kubenswrapper[4747]: I1215 05:45:23.236142 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-t5flq"] Dec 15 05:45:23 crc kubenswrapper[4747]: W1215 05:45:23.244430 4747 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e3304fa_a54c_4472_935a_aad6d8673d12.slice/crio-f786b2b5220a491d9937b8fc7dab117867620b972bc8a8bb0558b32ca1ef2224 WatchSource:0}: Error finding container f786b2b5220a491d9937b8fc7dab117867620b972bc8a8bb0558b32ca1ef2224: Status 404 returned error can't find the container with id f786b2b5220a491d9937b8fc7dab117867620b972bc8a8bb0558b32ca1ef2224 Dec 15 05:45:24 crc kubenswrapper[4747]: I1215 05:45:24.023100 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-t5flq" event={"ID":"7e3304fa-a54c-4472-935a-aad6d8673d12","Type":"ContainerStarted","Data":"f786b2b5220a491d9937b8fc7dab117867620b972bc8a8bb0558b32ca1ef2224"} Dec 15 05:45:24 crc kubenswrapper[4747]: I1215 05:45:24.025344 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-qmrz8" event={"ID":"56932a48-4e8d-4052-b33e-daff9aeec190","Type":"ContainerStarted","Data":"db049b41928d1f06d9bb82ac0855f9531c747b5467467370044e21ba76f7bec1"} Dec 15 05:45:24 crc kubenswrapper[4747]: I1215 05:45:24.026567 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-bwhbl" event={"ID":"2dc6869c-9693-4dc8-81eb-4ff08e334aaf","Type":"ContainerStarted","Data":"edef5d4352b37353017eb0cf43322b968d515418aecd16d1ff458d6f2f851666"} Dec 15 05:45:26 crc kubenswrapper[4747]: I1215 05:45:26.038677 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-t5flq" event={"ID":"7e3304fa-a54c-4472-935a-aad6d8673d12","Type":"ContainerStarted","Data":"89acde0656c40d4144af4f407fade7bf2853b68691639524fc56664aa1131e7d"} Dec 15 05:45:26 crc kubenswrapper[4747]: I1215 05:45:26.040965 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-qmrz8" 
event={"ID":"56932a48-4e8d-4052-b33e-daff9aeec190","Type":"ContainerStarted","Data":"2dcc826d3eb2fa3117cfbebeee07c62a2f25b96bec56c73a7d362c8c5b2ddd79"} Dec 15 05:45:26 crc kubenswrapper[4747]: I1215 05:45:26.041166 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-qmrz8" Dec 15 05:45:26 crc kubenswrapper[4747]: I1215 05:45:26.043095 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-bwhbl" event={"ID":"2dc6869c-9693-4dc8-81eb-4ff08e334aaf","Type":"ContainerStarted","Data":"16efae0921f12f8e370a214b16ba185ab15a28adc823738acd4360aa5a32a725"} Dec 15 05:45:26 crc kubenswrapper[4747]: I1215 05:45:26.073171 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-t5flq" podStartSLOduration=1.558380697 podStartE2EDuration="4.073143809s" podCreationTimestamp="2025-12-15 05:45:22 +0000 UTC" firstStartedPulling="2025-12-15 05:45:23.24698988 +0000 UTC m=+486.943501797" lastFinishedPulling="2025-12-15 05:45:25.761753003 +0000 UTC m=+489.458264909" observedRunningTime="2025-12-15 05:45:26.058700539 +0000 UTC m=+489.755212456" watchObservedRunningTime="2025-12-15 05:45:26.073143809 +0000 UTC m=+489.769655727" Dec 15 05:45:26 crc kubenswrapper[4747]: I1215 05:45:26.077368 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-qmrz8" podStartSLOduration=1.4189712380000001 podStartE2EDuration="4.077356441s" podCreationTimestamp="2025-12-15 05:45:22 +0000 UTC" firstStartedPulling="2025-12-15 05:45:23.068589867 +0000 UTC m=+486.765101783" lastFinishedPulling="2025-12-15 05:45:25.726975069 +0000 UTC m=+489.423486986" observedRunningTime="2025-12-15 05:45:26.071589133 +0000 UTC m=+489.768101050" watchObservedRunningTime="2025-12-15 05:45:26.077356441 +0000 UTC m=+489.773868358" Dec 15 05:45:26 crc kubenswrapper[4747]: I1215 05:45:26.086829 
4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-bwhbl" podStartSLOduration=1.441966468 podStartE2EDuration="4.086803576s" podCreationTimestamp="2025-12-15 05:45:22 +0000 UTC" firstStartedPulling="2025-12-15 05:45:23.082339982 +0000 UTC m=+486.778851899" lastFinishedPulling="2025-12-15 05:45:25.72717709 +0000 UTC m=+489.423689007" observedRunningTime="2025-12-15 05:45:26.08649745 +0000 UTC m=+489.783009367" watchObservedRunningTime="2025-12-15 05:45:26.086803576 +0000 UTC m=+489.783315492" Dec 15 05:45:32 crc kubenswrapper[4747]: I1215 05:45:32.860290 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-qmrz8" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.131556 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-82lhw"] Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.131912 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovn-controller" containerID="cri-o://34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8" gracePeriod=30 Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.132028 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b" gracePeriod=30 Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.132069 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="sbdb" 
containerID="cri-o://d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674" gracePeriod=30 Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.132079 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="kube-rbac-proxy-node" containerID="cri-o://75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b" gracePeriod=30 Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.132155 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovn-acl-logging" containerID="cri-o://d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75" gracePeriod=30 Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.132005 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="nbdb" containerID="cri-o://fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928" gracePeriod=30 Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.132108 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="northd" containerID="cri-o://cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170" gracePeriod=30 Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.171163 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovnkube-controller" containerID="cri-o://5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d" gracePeriod=30 Dec 15 05:45:34 crc kubenswrapper[4747]: E1215 05:45:34.201562 4747 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d is running failed: container process not found" containerID="5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Dec 15 05:45:34 crc kubenswrapper[4747]: E1215 05:45:34.202013 4747 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d is running failed: container process not found" containerID="5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Dec 15 05:45:34 crc kubenswrapper[4747]: E1215 05:45:34.202410 4747 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d is running failed: container process not found" containerID="5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Dec 15 05:45:34 crc kubenswrapper[4747]: E1215 05:45:34.202498 4747 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovnkube-controller" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.394216 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-82lhw_2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7/ovnkube-controller/3.log" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.396434 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-82lhw_2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7/ovn-acl-logging/0.log" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.396973 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-82lhw_2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7/ovn-controller/0.log" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.397400 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.447527 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t7qc9"] Dec 15 05:45:34 crc kubenswrapper[4747]: E1215 05:45:34.447895 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="kube-rbac-proxy-node" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.447911 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="kube-rbac-proxy-node" Dec 15 05:45:34 crc kubenswrapper[4747]: E1215 05:45:34.447941 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovnkube-controller" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.447948 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovnkube-controller" Dec 15 05:45:34 crc kubenswrapper[4747]: E1215 05:45:34.447958 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="kube-rbac-proxy-ovn-metrics" Dec 15 05:45:34 crc 
kubenswrapper[4747]: I1215 05:45:34.447965 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="kube-rbac-proxy-ovn-metrics" Dec 15 05:45:34 crc kubenswrapper[4747]: E1215 05:45:34.447975 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="kubecfg-setup" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.447981 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="kubecfg-setup" Dec 15 05:45:34 crc kubenswrapper[4747]: E1215 05:45:34.447996 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovn-acl-logging" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.448003 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovn-acl-logging" Dec 15 05:45:34 crc kubenswrapper[4747]: E1215 05:45:34.448013 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovnkube-controller" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.448019 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovnkube-controller" Dec 15 05:45:34 crc kubenswrapper[4747]: E1215 05:45:34.448030 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="nbdb" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.448036 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="nbdb" Dec 15 05:45:34 crc kubenswrapper[4747]: E1215 05:45:34.448051 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovn-controller" Dec 15 05:45:34 crc kubenswrapper[4747]: 
I1215 05:45:34.448057 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovn-controller" Dec 15 05:45:34 crc kubenswrapper[4747]: E1215 05:45:34.448065 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="northd" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.448071 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="northd" Dec 15 05:45:34 crc kubenswrapper[4747]: E1215 05:45:34.448078 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovnkube-controller" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.448083 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovnkube-controller" Dec 15 05:45:34 crc kubenswrapper[4747]: E1215 05:45:34.448091 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="sbdb" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.448096 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="sbdb" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.448239 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovnkube-controller" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.448249 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="kube-rbac-proxy-node" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.448256 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovn-acl-logging" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.448263 4747 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovnkube-controller" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.448272 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovn-controller" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.448281 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="kube-rbac-proxy-ovn-metrics" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.448287 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="sbdb" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.448294 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="nbdb" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.448300 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovnkube-controller" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.448305 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="northd" Dec 15 05:45:34 crc kubenswrapper[4747]: E1215 05:45:34.448471 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovnkube-controller" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.448479 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovnkube-controller" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.448621 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovnkube-controller" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 
05:45:34.448630 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovnkube-controller" Dec 15 05:45:34 crc kubenswrapper[4747]: E1215 05:45:34.448760 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovnkube-controller" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.448765 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerName="ovnkube-controller" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.450867 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.541259 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-slash\") pod \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.541340 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-cni-bin\") pod \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.541383 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-kubelet\") pod \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.541406 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-run-ovn\") pod \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.541423 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-run-openvswitch\") pod \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.541441 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-slash" (OuterVolumeSpecName: "host-slash") pod "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" (UID: "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.541478 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-ovnkube-script-lib\") pod \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.541502 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwzq6\" (UniqueName: \"kubernetes.io/projected/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-kube-api-access-zwzq6\") pod \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.541517 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" (UID: 
"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.541550 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" (UID: "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.541554 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-cni-netd\") pod \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.541598 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" (UID: "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.541620 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-systemd-units\") pod \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.541630 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" (UID: "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.541652 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" (UID: "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.541652 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" (UID: "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.541709 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.541751 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" (UID: "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542074 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-etc-openvswitch\") pod \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542123 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-var-lib-openvswitch\") pod \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542160 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-run-ovn-kubernetes\") pod 
\"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542205 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" (UID: "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542216 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-ovn-node-metrics-cert\") pod \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542245 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-node-log\") pod \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542256 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" (UID: "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542270 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-run-netns\") pod \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542276 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" (UID: "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542292 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" (UID: "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542299 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-env-overrides\") pod \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542310 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-node-log" (OuterVolumeSpecName: "node-log") pod "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" (UID: "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542324 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" (UID: "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542327 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-run-systemd\") pod \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542384 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-ovnkube-config\") pod \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542407 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-log-socket\") pod \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\" (UID: \"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7\") " Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542633 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-log-socket" (OuterVolumeSpecName: "log-socket") pod "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" (UID: "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542650 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c6e29be2-511d-41dc-9150-7d682de8d5f2-ovn-node-metrics-cert\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542733 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-systemd-units\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542772 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-var-lib-openvswitch\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542805 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542855 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-host-cni-bin\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.543217 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-node-log\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.543242 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-host-cni-netd\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.543292 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c6e29be2-511d-41dc-9150-7d682de8d5f2-env-overrides\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.543314 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c6e29be2-511d-41dc-9150-7d682de8d5f2-ovnkube-config\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.543342 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-host-kubelet\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.543367 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll2mg\" (UniqueName: \"kubernetes.io/projected/c6e29be2-511d-41dc-9150-7d682de8d5f2-kube-api-access-ll2mg\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542855 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" (UID: "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.543393 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-run-openvswitch\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.542919 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" (UID: "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.543537 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-log-socket\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.543576 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-host-run-netns\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.543647 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-host-run-ovn-kubernetes\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.543671 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c6e29be2-511d-41dc-9150-7d682de8d5f2-ovnkube-script-lib\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.543693 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-etc-openvswitch\") pod \"ovnkube-node-t7qc9\" (UID: 
\"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.543718 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-run-ovn\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.543737 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-run-systemd\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.543832 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-host-slash\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.544112 4747 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.544134 4747 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.544145 4747 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.544155 4747 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.544164 4747 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.544174 4747 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.544185 4747 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-node-log\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.544196 4747 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.544205 4747 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.544215 4747 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-ovnkube-config\") 
on node \"crc\" DevicePath \"\"" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.544222 4747 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-log-socket\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.544230 4747 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-slash\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.544242 4747 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.544251 4747 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.544260 4747 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.544268 4747 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.544277 4747 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.547871 4747 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" (UID: "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.548185 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-kube-api-access-zwzq6" (OuterVolumeSpecName: "kube-api-access-zwzq6") pod "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" (UID: "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7"). InnerVolumeSpecName "kube-api-access-zwzq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.555789 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" (UID: "2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.645678 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-host-slash\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.644903 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-host-slash\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.646542 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c6e29be2-511d-41dc-9150-7d682de8d5f2-ovn-node-metrics-cert\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.646585 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-systemd-units\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.646625 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-var-lib-openvswitch\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc 
kubenswrapper[4747]: I1215 05:45:34.646650 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.646671 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-systemd-units\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.646705 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-host-cni-bin\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.646734 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-node-log\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.646720 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-var-lib-openvswitch\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.646762 4747 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-host-cni-netd\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.646801 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-host-cni-netd\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.646810 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.646828 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-host-cni-bin\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.646841 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-node-log\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.646996 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c6e29be2-511d-41dc-9150-7d682de8d5f2-env-overrides\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.647045 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c6e29be2-511d-41dc-9150-7d682de8d5f2-ovnkube-config\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.647119 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-host-kubelet\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.647147 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll2mg\" (UniqueName: \"kubernetes.io/projected/c6e29be2-511d-41dc-9150-7d682de8d5f2-kube-api-access-ll2mg\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.647181 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-run-openvswitch\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.647248 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-log-socket\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.647276 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-host-run-netns\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.647384 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-host-kubelet\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.647603 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-run-openvswitch\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.647658 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c6e29be2-511d-41dc-9150-7d682de8d5f2-env-overrides\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.647677 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.647685 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-host-run-netns\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.647709 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-host-run-ovn-kubernetes\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.647710 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-log-socket\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.647775 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c6e29be2-511d-41dc-9150-7d682de8d5f2-ovnkube-script-lib\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.647969 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c6e29be2-511d-41dc-9150-7d682de8d5f2-ovnkube-config\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.648463 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c6e29be2-511d-41dc-9150-7d682de8d5f2-ovnkube-script-lib\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.648631 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-etc-openvswitch\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.648662 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-etc-openvswitch\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.648693 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-run-ovn\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.648725 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-run-systemd\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 
05:45:34.648770 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-run-ovn\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.648837 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwzq6\" (UniqueName: \"kubernetes.io/projected/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-kube-api-access-zwzq6\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.648852 4747 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.648854 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c6e29be2-511d-41dc-9150-7d682de8d5f2-run-systemd\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.648863 4747 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.650474 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c6e29be2-511d-41dc-9150-7d682de8d5f2-ovn-node-metrics-cert\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.661116 4747 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ll2mg\" (UniqueName: \"kubernetes.io/projected/c6e29be2-511d-41dc-9150-7d682de8d5f2-kube-api-access-ll2mg\") pod \"ovnkube-node-t7qc9\" (UID: \"c6e29be2-511d-41dc-9150-7d682de8d5f2\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:34 crc kubenswrapper[4747]: I1215 05:45:34.766554 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.099172 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmfps_89350c5d-9a77-499e-81ec-376b012cc219/kube-multus/2.log" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.099752 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmfps_89350c5d-9a77-499e-81ec-376b012cc219/kube-multus/1.log" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.100034 4747 generic.go:334] "Generic (PLEG): container finished" podID="89350c5d-9a77-499e-81ec-376b012cc219" containerID="eb1f5c773253872e7b72eb3d6d8dfb1affde066a8618f8d9fe96d1cb3254c5e1" exitCode=2 Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.100126 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmfps" event={"ID":"89350c5d-9a77-499e-81ec-376b012cc219","Type":"ContainerDied","Data":"eb1f5c773253872e7b72eb3d6d8dfb1affde066a8618f8d9fe96d1cb3254c5e1"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.100207 4747 scope.go:117] "RemoveContainer" containerID="bf7e29913438085594b529ef0499bebcb5d59f0027e5c46d493eb0316c2c553c" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.100779 4747 scope.go:117] "RemoveContainer" containerID="eb1f5c773253872e7b72eb3d6d8dfb1affde066a8618f8d9fe96d1cb3254c5e1" Dec 15 05:45:35 crc kubenswrapper[4747]: E1215 05:45:35.101182 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" 
with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gmfps_openshift-multus(89350c5d-9a77-499e-81ec-376b012cc219)\"" pod="openshift-multus/multus-gmfps" podUID="89350c5d-9a77-499e-81ec-376b012cc219" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.101702 4747 generic.go:334] "Generic (PLEG): container finished" podID="c6e29be2-511d-41dc-9150-7d682de8d5f2" containerID="75240006a67d66e8c24b53ca01dcfcc667c2985a3930b5b9daa3168c0ee6073d" exitCode=0 Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.101988 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" event={"ID":"c6e29be2-511d-41dc-9150-7d682de8d5f2","Type":"ContainerDied","Data":"75240006a67d66e8c24b53ca01dcfcc667c2985a3930b5b9daa3168c0ee6073d"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.102041 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" event={"ID":"c6e29be2-511d-41dc-9150-7d682de8d5f2","Type":"ContainerStarted","Data":"203d0a080a8255569d33b87e2d3a909f0231be98da42aaaf9bbab2ec46aaf12a"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.104501 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-82lhw_2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7/ovnkube-controller/3.log" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.107419 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-82lhw_2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7/ovn-acl-logging/0.log" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108002 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-82lhw_2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7/ovn-controller/0.log" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108399 4747 generic.go:334] "Generic (PLEG): container finished" podID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" 
containerID="5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d" exitCode=0 Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108424 4747 generic.go:334] "Generic (PLEG): container finished" podID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerID="d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674" exitCode=0 Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108434 4747 generic.go:334] "Generic (PLEG): container finished" podID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerID="fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928" exitCode=0 Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108443 4747 generic.go:334] "Generic (PLEG): container finished" podID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerID="cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170" exitCode=0 Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108460 4747 generic.go:334] "Generic (PLEG): container finished" podID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerID="dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b" exitCode=0 Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108468 4747 generic.go:334] "Generic (PLEG): container finished" podID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerID="75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b" exitCode=0 Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108441 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerDied","Data":"5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108483 4747 generic.go:334] "Generic (PLEG): container finished" podID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerID="d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75" exitCode=143 Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 
05:45:35.108491 4747 generic.go:334] "Generic (PLEG): container finished" podID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" containerID="34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8" exitCode=143 Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108512 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerDied","Data":"d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108529 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerDied","Data":"fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108537 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108557 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerDied","Data":"cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108572 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerDied","Data":"dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108584 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerDied","Data":"75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b"} Dec 15 
05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108598 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108615 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108637 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108645 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108653 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108659 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108664 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108670 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75"} Dec 15 
05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108676 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108682 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108690 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerDied","Data":"d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108716 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108723 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108730 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108735 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108741 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108747 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108753 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108758 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108764 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108770 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108790 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerDied","Data":"34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108800 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108808 4747 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108815 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108823 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108829 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108835 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108840 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108847 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108864 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108872 4747 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108882 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-82lhw" event={"ID":"2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7","Type":"ContainerDied","Data":"2251cfb228a94a76c54c7da530d41c6fd089ff40571247c2fd71b84610388940"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108892 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108899 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108906 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108911 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108918 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108948 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b"} Dec 15 
05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108955 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108960 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108966 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.108971 4747 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec"} Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.132071 4747 scope.go:117] "RemoveContainer" containerID="5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.133269 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-82lhw"] Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.139448 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-82lhw"] Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.153302 4747 scope.go:117] "RemoveContainer" containerID="312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.172461 4747 scope.go:117] "RemoveContainer" containerID="d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.184898 4747 scope.go:117] "RemoveContainer" 
containerID="fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.195649 4747 scope.go:117] "RemoveContainer" containerID="cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.207070 4747 scope.go:117] "RemoveContainer" containerID="dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.223180 4747 scope.go:117] "RemoveContainer" containerID="75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.234004 4747 scope.go:117] "RemoveContainer" containerID="d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.277718 4747 scope.go:117] "RemoveContainer" containerID="34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.293717 4747 scope.go:117] "RemoveContainer" containerID="9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.309152 4747 scope.go:117] "RemoveContainer" containerID="5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d" Dec 15 05:45:35 crc kubenswrapper[4747]: E1215 05:45:35.309522 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d\": container with ID starting with 5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d not found: ID does not exist" containerID="5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.309556 4747 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d"} err="failed to get container status \"5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d\": rpc error: code = NotFound desc = could not find container \"5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d\": container with ID starting with 5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.309587 4747 scope.go:117] "RemoveContainer" containerID="312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955" Dec 15 05:45:35 crc kubenswrapper[4747]: E1215 05:45:35.309872 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955\": container with ID starting with 312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955 not found: ID does not exist" containerID="312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.309899 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955"} err="failed to get container status \"312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955\": rpc error: code = NotFound desc = could not find container \"312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955\": container with ID starting with 312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.309917 4747 scope.go:117] "RemoveContainer" containerID="d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674" Dec 15 05:45:35 crc kubenswrapper[4747]: E1215 05:45:35.310221 4747 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\": container with ID starting with d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674 not found: ID does not exist" containerID="d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.310242 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674"} err="failed to get container status \"d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\": rpc error: code = NotFound desc = could not find container \"d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\": container with ID starting with d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.310256 4747 scope.go:117] "RemoveContainer" containerID="fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928" Dec 15 05:45:35 crc kubenswrapper[4747]: E1215 05:45:35.310484 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\": container with ID starting with fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928 not found: ID does not exist" containerID="fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.310499 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928"} err="failed to get container status \"fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\": rpc error: code = NotFound desc = could not find container 
\"fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\": container with ID starting with fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.310515 4747 scope.go:117] "RemoveContainer" containerID="cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170" Dec 15 05:45:35 crc kubenswrapper[4747]: E1215 05:45:35.310772 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\": container with ID starting with cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170 not found: ID does not exist" containerID="cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.310787 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170"} err="failed to get container status \"cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\": rpc error: code = NotFound desc = could not find container \"cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\": container with ID starting with cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.310799 4747 scope.go:117] "RemoveContainer" containerID="dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b" Dec 15 05:45:35 crc kubenswrapper[4747]: E1215 05:45:35.311225 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\": container with ID starting with dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b not found: ID does not exist" 
containerID="dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.311243 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b"} err="failed to get container status \"dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\": rpc error: code = NotFound desc = could not find container \"dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\": container with ID starting with dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.311256 4747 scope.go:117] "RemoveContainer" containerID="75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b" Dec 15 05:45:35 crc kubenswrapper[4747]: E1215 05:45:35.311485 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\": container with ID starting with 75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b not found: ID does not exist" containerID="75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.311507 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b"} err="failed to get container status \"75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\": rpc error: code = NotFound desc = could not find container \"75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\": container with ID starting with 75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.311524 4747 scope.go:117] 
"RemoveContainer" containerID="d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75" Dec 15 05:45:35 crc kubenswrapper[4747]: E1215 05:45:35.311860 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\": container with ID starting with d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75 not found: ID does not exist" containerID="d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.311896 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75"} err="failed to get container status \"d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\": rpc error: code = NotFound desc = could not find container \"d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\": container with ID starting with d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.311915 4747 scope.go:117] "RemoveContainer" containerID="34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8" Dec 15 05:45:35 crc kubenswrapper[4747]: E1215 05:45:35.313103 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\": container with ID starting with 34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8 not found: ID does not exist" containerID="34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.313126 4747 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8"} err="failed to get container status \"34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\": rpc error: code = NotFound desc = could not find container \"34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\": container with ID starting with 34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.313139 4747 scope.go:117] "RemoveContainer" containerID="9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec" Dec 15 05:45:35 crc kubenswrapper[4747]: E1215 05:45:35.313620 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\": container with ID starting with 9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec not found: ID does not exist" containerID="9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.313673 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec"} err="failed to get container status \"9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\": rpc error: code = NotFound desc = could not find container \"9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\": container with ID starting with 9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.313710 4747 scope.go:117] "RemoveContainer" containerID="5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.314301 4747 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d"} err="failed to get container status \"5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d\": rpc error: code = NotFound desc = could not find container \"5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d\": container with ID starting with 5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.314343 4747 scope.go:117] "RemoveContainer" containerID="312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.314658 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955"} err="failed to get container status \"312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955\": rpc error: code = NotFound desc = could not find container \"312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955\": container with ID starting with 312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.314685 4747 scope.go:117] "RemoveContainer" containerID="d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.315038 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674"} err="failed to get container status \"d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\": rpc error: code = NotFound desc = could not find container \"d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\": container with ID starting with d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674 not 
found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.315070 4747 scope.go:117] "RemoveContainer" containerID="fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.315379 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928"} err="failed to get container status \"fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\": rpc error: code = NotFound desc = could not find container \"fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\": container with ID starting with fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.315401 4747 scope.go:117] "RemoveContainer" containerID="cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.315843 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170"} err="failed to get container status \"cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\": rpc error: code = NotFound desc = could not find container \"cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\": container with ID starting with cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.315875 4747 scope.go:117] "RemoveContainer" containerID="dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.316176 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b"} err="failed to get 
container status \"dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\": rpc error: code = NotFound desc = could not find container \"dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\": container with ID starting with dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.316200 4747 scope.go:117] "RemoveContainer" containerID="75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.316669 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b"} err="failed to get container status \"75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\": rpc error: code = NotFound desc = could not find container \"75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\": container with ID starting with 75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.316700 4747 scope.go:117] "RemoveContainer" containerID="d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.316942 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75"} err="failed to get container status \"d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\": rpc error: code = NotFound desc = could not find container \"d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\": container with ID starting with d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.316966 4747 scope.go:117] "RemoveContainer" 
containerID="34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.318067 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8"} err="failed to get container status \"34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\": rpc error: code = NotFound desc = could not find container \"34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\": container with ID starting with 34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.318096 4747 scope.go:117] "RemoveContainer" containerID="9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.318465 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec"} err="failed to get container status \"9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\": rpc error: code = NotFound desc = could not find container \"9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\": container with ID starting with 9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.318504 4747 scope.go:117] "RemoveContainer" containerID="5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.319116 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d"} err="failed to get container status \"5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d\": rpc error: code = NotFound desc = could 
not find container \"5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d\": container with ID starting with 5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.319156 4747 scope.go:117] "RemoveContainer" containerID="312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.320033 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955"} err="failed to get container status \"312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955\": rpc error: code = NotFound desc = could not find container \"312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955\": container with ID starting with 312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.320094 4747 scope.go:117] "RemoveContainer" containerID="d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.320412 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674"} err="failed to get container status \"d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\": rpc error: code = NotFound desc = could not find container \"d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\": container with ID starting with d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.320440 4747 scope.go:117] "RemoveContainer" containerID="fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 
05:45:35.320671 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928"} err="failed to get container status \"fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\": rpc error: code = NotFound desc = could not find container \"fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\": container with ID starting with fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.320698 4747 scope.go:117] "RemoveContainer" containerID="cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.320912 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170"} err="failed to get container status \"cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\": rpc error: code = NotFound desc = could not find container \"cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\": container with ID starting with cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.320967 4747 scope.go:117] "RemoveContainer" containerID="dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.321348 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b"} err="failed to get container status \"dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\": rpc error: code = NotFound desc = could not find container \"dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\": container with ID starting with 
dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.321373 4747 scope.go:117] "RemoveContainer" containerID="75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.321699 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b"} err="failed to get container status \"75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\": rpc error: code = NotFound desc = could not find container \"75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\": container with ID starting with 75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.321742 4747 scope.go:117] "RemoveContainer" containerID="d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.322058 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75"} err="failed to get container status \"d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\": rpc error: code = NotFound desc = could not find container \"d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\": container with ID starting with d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.322081 4747 scope.go:117] "RemoveContainer" containerID="34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.322396 4747 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8"} err="failed to get container status \"34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\": rpc error: code = NotFound desc = could not find container \"34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\": container with ID starting with 34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.322417 4747 scope.go:117] "RemoveContainer" containerID="9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.322874 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec"} err="failed to get container status \"9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\": rpc error: code = NotFound desc = could not find container \"9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\": container with ID starting with 9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.322896 4747 scope.go:117] "RemoveContainer" containerID="5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.323458 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d"} err="failed to get container status \"5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d\": rpc error: code = NotFound desc = could not find container \"5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d\": container with ID starting with 5c050824b70702ea79444e67ed807831c7d2bc246cfc813af6d50191cf4bd56d not found: ID does not 
exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.323480 4747 scope.go:117] "RemoveContainer" containerID="312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.324755 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955"} err="failed to get container status \"312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955\": rpc error: code = NotFound desc = could not find container \"312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955\": container with ID starting with 312af0941768922b2c2c313a7209f9271ef4df55fe0be3baa0ebc0d91637b955 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.324779 4747 scope.go:117] "RemoveContainer" containerID="d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.325059 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674"} err="failed to get container status \"d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\": rpc error: code = NotFound desc = could not find container \"d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674\": container with ID starting with d6d75a912854b4c925673dd33b59122dc4de90bb97a90fd6e3bb46f84905e674 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.325074 4747 scope.go:117] "RemoveContainer" containerID="fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.325312 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928"} err="failed to get container status 
\"fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\": rpc error: code = NotFound desc = could not find container \"fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928\": container with ID starting with fc202fde29290ddacae2143ac64633c1087334f16b028e76929a5c43cb333928 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.325325 4747 scope.go:117] "RemoveContainer" containerID="cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.325574 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170"} err="failed to get container status \"cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\": rpc error: code = NotFound desc = could not find container \"cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170\": container with ID starting with cdcc24400f9dab1692fdc444b2b409ed6d7e2712c2cfe48c32c039a38c9db170 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.325590 4747 scope.go:117] "RemoveContainer" containerID="dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.325831 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b"} err="failed to get container status \"dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\": rpc error: code = NotFound desc = could not find container \"dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b\": container with ID starting with dd76dabf81163e580c4b64678249e90107e94aab4a92e3563fb0f404f6053f6b not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.325849 4747 scope.go:117] "RemoveContainer" 
containerID="75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.326189 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b"} err="failed to get container status \"75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\": rpc error: code = NotFound desc = could not find container \"75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b\": container with ID starting with 75ebcef45fe995a308de7a60ff663e1798434db0785a0dfa2d9d8ced2ab0017b not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.326241 4747 scope.go:117] "RemoveContainer" containerID="d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.326569 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75"} err="failed to get container status \"d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\": rpc error: code = NotFound desc = could not find container \"d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75\": container with ID starting with d59b9e6e8b50c4f7f2661e4c5f7893d86f70589f81092322e76ace52eade6b75 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.326590 4747 scope.go:117] "RemoveContainer" containerID="34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.326888 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8"} err="failed to get container status \"34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\": rpc error: code = NotFound desc = could 
not find container \"34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8\": container with ID starting with 34db3b2d5de1c4b1ef08bfff80fb724214a8382f61e7ce9399b560223db857d8 not found: ID does not exist" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.326921 4747 scope.go:117] "RemoveContainer" containerID="9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec" Dec 15 05:45:35 crc kubenswrapper[4747]: I1215 05:45:35.327226 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec"} err="failed to get container status \"9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\": rpc error: code = NotFound desc = could not find container \"9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec\": container with ID starting with 9f4e15be1324d7a0fd04f0606fceb8ed5b82604dabac5f52d6e3efe982a266ec not found: ID does not exist" Dec 15 05:45:36 crc kubenswrapper[4747]: I1215 05:45:36.117054 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmfps_89350c5d-9a77-499e-81ec-376b012cc219/kube-multus/2.log" Dec 15 05:45:36 crc kubenswrapper[4747]: I1215 05:45:36.120553 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" event={"ID":"c6e29be2-511d-41dc-9150-7d682de8d5f2","Type":"ContainerStarted","Data":"d3f3567f28cf207f16c660026bf74f87b1897ccaa52b638a19a7bcf5bdc9de0d"} Dec 15 05:45:36 crc kubenswrapper[4747]: I1215 05:45:36.120593 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" event={"ID":"c6e29be2-511d-41dc-9150-7d682de8d5f2","Type":"ContainerStarted","Data":"f46f1ff2239301bb91cda003e9bb16b98cd0d07d7d56308d758c52387458b107"} Dec 15 05:45:36 crc kubenswrapper[4747]: I1215 05:45:36.120608 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" event={"ID":"c6e29be2-511d-41dc-9150-7d682de8d5f2","Type":"ContainerStarted","Data":"010f4e166e7046152b0ef56b657509655680d84b8ac1fdcb000844d60d19cdff"} Dec 15 05:45:36 crc kubenswrapper[4747]: I1215 05:45:36.120620 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" event={"ID":"c6e29be2-511d-41dc-9150-7d682de8d5f2","Type":"ContainerStarted","Data":"a02506813572681b5f489d714a13841cfb9254058844ce389a41b12f5438066f"} Dec 15 05:45:36 crc kubenswrapper[4747]: I1215 05:45:36.120634 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" event={"ID":"c6e29be2-511d-41dc-9150-7d682de8d5f2","Type":"ContainerStarted","Data":"0c5671709aa64855a79cab2cfcc00d44513161f01fbd1c85470c2c269ba3ef4d"} Dec 15 05:45:36 crc kubenswrapper[4747]: I1215 05:45:36.120644 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" event={"ID":"c6e29be2-511d-41dc-9150-7d682de8d5f2","Type":"ContainerStarted","Data":"4abdf479a3e04a1cf32f741751bd8a6505b3a53b9e645efa1c7c1ae0b27c4b32"} Dec 15 05:45:36 crc kubenswrapper[4747]: I1215 05:45:36.636183 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7" path="/var/lib/kubelet/pods/2b2ee692-1e9a-49c0-b2f0-dfed89ebf7b7/volumes" Dec 15 05:45:38 crc kubenswrapper[4747]: I1215 05:45:38.136654 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" event={"ID":"c6e29be2-511d-41dc-9150-7d682de8d5f2","Type":"ContainerStarted","Data":"6b45fd878762b6630daeb650fc4d6d8566842476f4260d5a96656a79d4dc5e2d"} Dec 15 05:45:40 crc kubenswrapper[4747]: I1215 05:45:40.150807 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" 
event={"ID":"c6e29be2-511d-41dc-9150-7d682de8d5f2","Type":"ContainerStarted","Data":"2a30c7e00fbc89ea995fa0b9db27001059ebf2f072d8880a7384ac54e218f1d0"} Dec 15 05:45:40 crc kubenswrapper[4747]: I1215 05:45:40.151377 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:40 crc kubenswrapper[4747]: I1215 05:45:40.151391 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:40 crc kubenswrapper[4747]: I1215 05:45:40.174567 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" podStartSLOduration=6.174550341 podStartE2EDuration="6.174550341s" podCreationTimestamp="2025-12-15 05:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:45:40.172578238 +0000 UTC m=+503.869090155" watchObservedRunningTime="2025-12-15 05:45:40.174550341 +0000 UTC m=+503.871062258" Dec 15 05:45:40 crc kubenswrapper[4747]: I1215 05:45:40.176004 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:41 crc kubenswrapper[4747]: I1215 05:45:41.156803 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:41 crc kubenswrapper[4747]: I1215 05:45:41.184398 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:45:49 crc kubenswrapper[4747]: I1215 05:45:49.629687 4747 scope.go:117] "RemoveContainer" containerID="eb1f5c773253872e7b72eb3d6d8dfb1affde066a8618f8d9fe96d1cb3254c5e1" Dec 15 05:45:49 crc kubenswrapper[4747]: E1215 05:45:49.630534 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" 
with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gmfps_openshift-multus(89350c5d-9a77-499e-81ec-376b012cc219)\"" pod="openshift-multus/multus-gmfps" podUID="89350c5d-9a77-499e-81ec-376b012cc219" Dec 15 05:46:00 crc kubenswrapper[4747]: I1215 05:46:00.176037 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh"] Dec 15 05:46:00 crc kubenswrapper[4747]: I1215 05:46:00.177532 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" Dec 15 05:46:00 crc kubenswrapper[4747]: I1215 05:46:00.182381 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 15 05:46:00 crc kubenswrapper[4747]: I1215 05:46:00.185172 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh"] Dec 15 05:46:00 crc kubenswrapper[4747]: I1215 05:46:00.355960 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30696d2b-dd70-4eb7-88c1-9bc23b39c07c-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh\" (UID: \"30696d2b-dd70-4eb7-88c1-9bc23b39c07c\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" Dec 15 05:46:00 crc kubenswrapper[4747]: I1215 05:46:00.356011 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2phv2\" (UniqueName: \"kubernetes.io/projected/30696d2b-dd70-4eb7-88c1-9bc23b39c07c-kube-api-access-2phv2\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh\" (UID: \"30696d2b-dd70-4eb7-88c1-9bc23b39c07c\") " 
pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" Dec 15 05:46:00 crc kubenswrapper[4747]: I1215 05:46:00.356087 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30696d2b-dd70-4eb7-88c1-9bc23b39c07c-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh\" (UID: \"30696d2b-dd70-4eb7-88c1-9bc23b39c07c\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" Dec 15 05:46:00 crc kubenswrapper[4747]: I1215 05:46:00.457722 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30696d2b-dd70-4eb7-88c1-9bc23b39c07c-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh\" (UID: \"30696d2b-dd70-4eb7-88c1-9bc23b39c07c\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" Dec 15 05:46:00 crc kubenswrapper[4747]: I1215 05:46:00.457774 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2phv2\" (UniqueName: \"kubernetes.io/projected/30696d2b-dd70-4eb7-88c1-9bc23b39c07c-kube-api-access-2phv2\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh\" (UID: \"30696d2b-dd70-4eb7-88c1-9bc23b39c07c\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" Dec 15 05:46:00 crc kubenswrapper[4747]: I1215 05:46:00.457827 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30696d2b-dd70-4eb7-88c1-9bc23b39c07c-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh\" (UID: \"30696d2b-dd70-4eb7-88c1-9bc23b39c07c\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" Dec 15 05:46:00 crc kubenswrapper[4747]: I1215 
05:46:00.458309 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30696d2b-dd70-4eb7-88c1-9bc23b39c07c-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh\" (UID: \"30696d2b-dd70-4eb7-88c1-9bc23b39c07c\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" Dec 15 05:46:00 crc kubenswrapper[4747]: I1215 05:46:00.458374 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30696d2b-dd70-4eb7-88c1-9bc23b39c07c-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh\" (UID: \"30696d2b-dd70-4eb7-88c1-9bc23b39c07c\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" Dec 15 05:46:00 crc kubenswrapper[4747]: I1215 05:46:00.475090 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2phv2\" (UniqueName: \"kubernetes.io/projected/30696d2b-dd70-4eb7-88c1-9bc23b39c07c-kube-api-access-2phv2\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh\" (UID: \"30696d2b-dd70-4eb7-88c1-9bc23b39c07c\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" Dec 15 05:46:00 crc kubenswrapper[4747]: I1215 05:46:00.490878 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" Dec 15 05:46:00 crc kubenswrapper[4747]: E1215 05:46:00.517915 4747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_openshift-marketplace_30696d2b-dd70-4eb7-88c1-9bc23b39c07c_0(a1eddf41a3157434b30a2a3774d6080c8e4a94006246ee5c1c7150f32c6dc5ab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 15 05:46:00 crc kubenswrapper[4747]: E1215 05:46:00.518035 4747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_openshift-marketplace_30696d2b-dd70-4eb7-88c1-9bc23b39c07c_0(a1eddf41a3157434b30a2a3774d6080c8e4a94006246ee5c1c7150f32c6dc5ab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" Dec 15 05:46:00 crc kubenswrapper[4747]: E1215 05:46:00.518067 4747 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_openshift-marketplace_30696d2b-dd70-4eb7-88c1-9bc23b39c07c_0(a1eddf41a3157434b30a2a3774d6080c8e4a94006246ee5c1c7150f32c6dc5ab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" Dec 15 05:46:00 crc kubenswrapper[4747]: E1215 05:46:00.518145 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_openshift-marketplace(30696d2b-dd70-4eb7-88c1-9bc23b39c07c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_openshift-marketplace(30696d2b-dd70-4eb7-88c1-9bc23b39c07c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_openshift-marketplace_30696d2b-dd70-4eb7-88c1-9bc23b39c07c_0(a1eddf41a3157434b30a2a3774d6080c8e4a94006246ee5c1c7150f32c6dc5ab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" podUID="30696d2b-dd70-4eb7-88c1-9bc23b39c07c" Dec 15 05:46:01 crc kubenswrapper[4747]: I1215 05:46:01.259951 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" Dec 15 05:46:01 crc kubenswrapper[4747]: I1215 05:46:01.260389 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" Dec 15 05:46:01 crc kubenswrapper[4747]: E1215 05:46:01.279213 4747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_openshift-marketplace_30696d2b-dd70-4eb7-88c1-9bc23b39c07c_0(49ff2b9e5d4e1c0e3885c5bbef6a547aec3d0cda3b3a4149f8e3f9f989273e5b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Dec 15 05:46:01 crc kubenswrapper[4747]: E1215 05:46:01.279271 4747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_openshift-marketplace_30696d2b-dd70-4eb7-88c1-9bc23b39c07c_0(49ff2b9e5d4e1c0e3885c5bbef6a547aec3d0cda3b3a4149f8e3f9f989273e5b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" Dec 15 05:46:01 crc kubenswrapper[4747]: E1215 05:46:01.279299 4747 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_openshift-marketplace_30696d2b-dd70-4eb7-88c1-9bc23b39c07c_0(49ff2b9e5d4e1c0e3885c5bbef6a547aec3d0cda3b3a4149f8e3f9f989273e5b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" Dec 15 05:46:01 crc kubenswrapper[4747]: E1215 05:46:01.279352 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_openshift-marketplace(30696d2b-dd70-4eb7-88c1-9bc23b39c07c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_openshift-marketplace(30696d2b-dd70-4eb7-88c1-9bc23b39c07c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_openshift-marketplace_30696d2b-dd70-4eb7-88c1-9bc23b39c07c_0(49ff2b9e5d4e1c0e3885c5bbef6a547aec3d0cda3b3a4149f8e3f9f989273e5b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" podUID="30696d2b-dd70-4eb7-88c1-9bc23b39c07c" Dec 15 05:46:03 crc kubenswrapper[4747]: I1215 05:46:03.628621 4747 scope.go:117] "RemoveContainer" containerID="eb1f5c773253872e7b72eb3d6d8dfb1affde066a8618f8d9fe96d1cb3254c5e1" Dec 15 05:46:04 crc kubenswrapper[4747]: I1215 05:46:04.282152 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmfps_89350c5d-9a77-499e-81ec-376b012cc219/kube-multus/2.log" Dec 15 05:46:04 crc kubenswrapper[4747]: I1215 05:46:04.282491 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmfps" event={"ID":"89350c5d-9a77-499e-81ec-376b012cc219","Type":"ContainerStarted","Data":"aa2048879de878c4132c4765c6dc18d6cea55c228d19adb9953e5855415b14d2"} Dec 15 05:46:04 crc kubenswrapper[4747]: I1215 05:46:04.788647 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t7qc9" Dec 15 05:46:13 crc kubenswrapper[4747]: 
I1215 05:46:13.628360 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" Dec 15 05:46:13 crc kubenswrapper[4747]: I1215 05:46:13.629456 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" Dec 15 05:46:13 crc kubenswrapper[4747]: I1215 05:46:13.997016 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh"] Dec 15 05:46:14 crc kubenswrapper[4747]: W1215 05:46:14.000593 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30696d2b_dd70_4eb7_88c1_9bc23b39c07c.slice/crio-ce701d391328a75abb54816088cfc458f1bc49500957439188c31e4249d53d24 WatchSource:0}: Error finding container ce701d391328a75abb54816088cfc458f1bc49500957439188c31e4249d53d24: Status 404 returned error can't find the container with id ce701d391328a75abb54816088cfc458f1bc49500957439188c31e4249d53d24 Dec 15 05:46:14 crc kubenswrapper[4747]: I1215 05:46:14.344822 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" event={"ID":"30696d2b-dd70-4eb7-88c1-9bc23b39c07c","Type":"ContainerStarted","Data":"c394a907bd73e0a146aac9e381c9bc48ea17b0891c665001d577383777817410"} Dec 15 05:46:14 crc kubenswrapper[4747]: I1215 05:46:14.344883 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" event={"ID":"30696d2b-dd70-4eb7-88c1-9bc23b39c07c","Type":"ContainerStarted","Data":"ce701d391328a75abb54816088cfc458f1bc49500957439188c31e4249d53d24"} Dec 15 05:46:15 crc kubenswrapper[4747]: I1215 05:46:15.353549 4747 generic.go:334] "Generic (PLEG): 
container finished" podID="30696d2b-dd70-4eb7-88c1-9bc23b39c07c" containerID="c394a907bd73e0a146aac9e381c9bc48ea17b0891c665001d577383777817410" exitCode=0 Dec 15 05:46:15 crc kubenswrapper[4747]: I1215 05:46:15.353610 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" event={"ID":"30696d2b-dd70-4eb7-88c1-9bc23b39c07c","Type":"ContainerDied","Data":"c394a907bd73e0a146aac9e381c9bc48ea17b0891c665001d577383777817410"} Dec 15 05:46:17 crc kubenswrapper[4747]: I1215 05:46:17.364550 4747 generic.go:334] "Generic (PLEG): container finished" podID="30696d2b-dd70-4eb7-88c1-9bc23b39c07c" containerID="658190d7ecf4965a44eaf406f5abe2d67e212d446846c8c4ea67d18be386a9e5" exitCode=0 Dec 15 05:46:17 crc kubenswrapper[4747]: I1215 05:46:17.364590 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" event={"ID":"30696d2b-dd70-4eb7-88c1-9bc23b39c07c","Type":"ContainerDied","Data":"658190d7ecf4965a44eaf406f5abe2d67e212d446846c8c4ea67d18be386a9e5"} Dec 15 05:46:18 crc kubenswrapper[4747]: I1215 05:46:18.372904 4747 generic.go:334] "Generic (PLEG): container finished" podID="30696d2b-dd70-4eb7-88c1-9bc23b39c07c" containerID="a8ece5a777987ea182ca4d68c321b67791908118974ae32de2a5fe9c745dfcc9" exitCode=0 Dec 15 05:46:18 crc kubenswrapper[4747]: I1215 05:46:18.372982 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" event={"ID":"30696d2b-dd70-4eb7-88c1-9bc23b39c07c","Type":"ContainerDied","Data":"a8ece5a777987ea182ca4d68c321b67791908118974ae32de2a5fe9c745dfcc9"} Dec 15 05:46:19 crc kubenswrapper[4747]: I1215 05:46:19.562732 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" Dec 15 05:46:19 crc kubenswrapper[4747]: I1215 05:46:19.660886 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30696d2b-dd70-4eb7-88c1-9bc23b39c07c-bundle\") pod \"30696d2b-dd70-4eb7-88c1-9bc23b39c07c\" (UID: \"30696d2b-dd70-4eb7-88c1-9bc23b39c07c\") " Dec 15 05:46:19 crc kubenswrapper[4747]: I1215 05:46:19.661059 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2phv2\" (UniqueName: \"kubernetes.io/projected/30696d2b-dd70-4eb7-88c1-9bc23b39c07c-kube-api-access-2phv2\") pod \"30696d2b-dd70-4eb7-88c1-9bc23b39c07c\" (UID: \"30696d2b-dd70-4eb7-88c1-9bc23b39c07c\") " Dec 15 05:46:19 crc kubenswrapper[4747]: I1215 05:46:19.661097 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30696d2b-dd70-4eb7-88c1-9bc23b39c07c-util\") pod \"30696d2b-dd70-4eb7-88c1-9bc23b39c07c\" (UID: \"30696d2b-dd70-4eb7-88c1-9bc23b39c07c\") " Dec 15 05:46:19 crc kubenswrapper[4747]: I1215 05:46:19.661881 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30696d2b-dd70-4eb7-88c1-9bc23b39c07c-bundle" (OuterVolumeSpecName: "bundle") pod "30696d2b-dd70-4eb7-88c1-9bc23b39c07c" (UID: "30696d2b-dd70-4eb7-88c1-9bc23b39c07c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:46:19 crc kubenswrapper[4747]: I1215 05:46:19.666996 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30696d2b-dd70-4eb7-88c1-9bc23b39c07c-kube-api-access-2phv2" (OuterVolumeSpecName: "kube-api-access-2phv2") pod "30696d2b-dd70-4eb7-88c1-9bc23b39c07c" (UID: "30696d2b-dd70-4eb7-88c1-9bc23b39c07c"). InnerVolumeSpecName "kube-api-access-2phv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:46:19 crc kubenswrapper[4747]: I1215 05:46:19.669437 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30696d2b-dd70-4eb7-88c1-9bc23b39c07c-util" (OuterVolumeSpecName: "util") pod "30696d2b-dd70-4eb7-88c1-9bc23b39c07c" (UID: "30696d2b-dd70-4eb7-88c1-9bc23b39c07c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:46:19 crc kubenswrapper[4747]: I1215 05:46:19.762956 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2phv2\" (UniqueName: \"kubernetes.io/projected/30696d2b-dd70-4eb7-88c1-9bc23b39c07c-kube-api-access-2phv2\") on node \"crc\" DevicePath \"\"" Dec 15 05:46:19 crc kubenswrapper[4747]: I1215 05:46:19.763011 4747 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30696d2b-dd70-4eb7-88c1-9bc23b39c07c-util\") on node \"crc\" DevicePath \"\"" Dec 15 05:46:19 crc kubenswrapper[4747]: I1215 05:46:19.763021 4747 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30696d2b-dd70-4eb7-88c1-9bc23b39c07c-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:46:20 crc kubenswrapper[4747]: I1215 05:46:20.387069 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" event={"ID":"30696d2b-dd70-4eb7-88c1-9bc23b39c07c","Type":"ContainerDied","Data":"ce701d391328a75abb54816088cfc458f1bc49500957439188c31e4249d53d24"} Dec 15 05:46:20 crc kubenswrapper[4747]: I1215 05:46:20.387122 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce701d391328a75abb54816088cfc458f1bc49500957439188c31e4249d53d24" Dec 15 05:46:20 crc kubenswrapper[4747]: I1215 05:46:20.387443 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh" Dec 15 05:46:26 crc kubenswrapper[4747]: I1215 05:46:26.808436 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-6b7xv"] Dec 15 05:46:26 crc kubenswrapper[4747]: E1215 05:46:26.809149 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30696d2b-dd70-4eb7-88c1-9bc23b39c07c" containerName="util" Dec 15 05:46:26 crc kubenswrapper[4747]: I1215 05:46:26.809164 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="30696d2b-dd70-4eb7-88c1-9bc23b39c07c" containerName="util" Dec 15 05:46:26 crc kubenswrapper[4747]: E1215 05:46:26.809175 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30696d2b-dd70-4eb7-88c1-9bc23b39c07c" containerName="extract" Dec 15 05:46:26 crc kubenswrapper[4747]: I1215 05:46:26.809180 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="30696d2b-dd70-4eb7-88c1-9bc23b39c07c" containerName="extract" Dec 15 05:46:26 crc kubenswrapper[4747]: E1215 05:46:26.809193 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30696d2b-dd70-4eb7-88c1-9bc23b39c07c" containerName="pull" Dec 15 05:46:26 crc kubenswrapper[4747]: I1215 05:46:26.809198 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="30696d2b-dd70-4eb7-88c1-9bc23b39c07c" containerName="pull" Dec 15 05:46:26 crc kubenswrapper[4747]: I1215 05:46:26.809292 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="30696d2b-dd70-4eb7-88c1-9bc23b39c07c" containerName="extract" Dec 15 05:46:26 crc kubenswrapper[4747]: I1215 05:46:26.809693 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-6b7xv" Dec 15 05:46:26 crc kubenswrapper[4747]: I1215 05:46:26.812107 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 15 05:46:26 crc kubenswrapper[4747]: I1215 05:46:26.812216 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-8j9b6" Dec 15 05:46:26 crc kubenswrapper[4747]: I1215 05:46:26.815362 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 15 05:46:26 crc kubenswrapper[4747]: I1215 05:46:26.827398 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-6b7xv"] Dec 15 05:46:26 crc kubenswrapper[4747]: I1215 05:46:26.944966 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pdrq\" (UniqueName: \"kubernetes.io/projected/e1f942fd-4913-456f-b28c-463fd3c2759e-kube-api-access-5pdrq\") pod \"nmstate-operator-6769fb99d-6b7xv\" (UID: \"e1f942fd-4913-456f-b28c-463fd3c2759e\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-6b7xv" Dec 15 05:46:27 crc kubenswrapper[4747]: I1215 05:46:27.046108 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pdrq\" (UniqueName: \"kubernetes.io/projected/e1f942fd-4913-456f-b28c-463fd3c2759e-kube-api-access-5pdrq\") pod \"nmstate-operator-6769fb99d-6b7xv\" (UID: \"e1f942fd-4913-456f-b28c-463fd3c2759e\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-6b7xv" Dec 15 05:46:27 crc kubenswrapper[4747]: I1215 05:46:27.064469 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pdrq\" (UniqueName: \"kubernetes.io/projected/e1f942fd-4913-456f-b28c-463fd3c2759e-kube-api-access-5pdrq\") pod \"nmstate-operator-6769fb99d-6b7xv\" (UID: 
\"e1f942fd-4913-456f-b28c-463fd3c2759e\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-6b7xv" Dec 15 05:46:27 crc kubenswrapper[4747]: I1215 05:46:27.122830 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-6b7xv" Dec 15 05:46:27 crc kubenswrapper[4747]: I1215 05:46:27.284332 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-6b7xv"] Dec 15 05:46:27 crc kubenswrapper[4747]: I1215 05:46:27.427040 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-6b7xv" event={"ID":"e1f942fd-4913-456f-b28c-463fd3c2759e","Type":"ContainerStarted","Data":"4753125a86ba5406b53b0875847b6b98d841fa93cbe8d80adcbd6f90c658b5e0"} Dec 15 05:46:30 crc kubenswrapper[4747]: I1215 05:46:30.449287 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-6b7xv" event={"ID":"e1f942fd-4913-456f-b28c-463fd3c2759e","Type":"ContainerStarted","Data":"4f8c83824da580dee06ee6674c0aeee2f77bcf7b23a5689686afd1e81d8c37d1"} Dec 15 05:46:30 crc kubenswrapper[4747]: I1215 05:46:30.465803 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-6b7xv" podStartSLOduration=2.391837497 podStartE2EDuration="4.465779324s" podCreationTimestamp="2025-12-15 05:46:26 +0000 UTC" firstStartedPulling="2025-12-15 05:46:27.292726197 +0000 UTC m=+550.989238113" lastFinishedPulling="2025-12-15 05:46:29.366668023 +0000 UTC m=+553.063179940" observedRunningTime="2025-12-15 05:46:30.461301821 +0000 UTC m=+554.157813729" watchObservedRunningTime="2025-12-15 05:46:30.465779324 +0000 UTC m=+554.162291241" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.234392 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-pm75r"] Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.235246 
4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-pm75r" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.237261 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-64sdp" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.246384 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-pm75r"] Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.249482 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-fns29"] Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.250178 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-fns29" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.251773 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.253347 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-fns29"] Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.266680 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-4vz6n"] Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.267189 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-4vz6n" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.355397 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-tjfnm"] Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.356063 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tjfnm" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.357510 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.357616 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-v87xg" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.359647 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.366765 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-tjfnm"] Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.409821 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6q6j\" (UniqueName: \"kubernetes.io/projected/37453676-6389-4503-b1dc-9afdbd759c64-kube-api-access-b6q6j\") pod \"nmstate-webhook-f8fb84555-fns29\" (UID: \"37453676-6389-4503-b1dc-9afdbd759c64\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-fns29" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.409867 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/37453676-6389-4503-b1dc-9afdbd759c64-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-fns29\" (UID: \"37453676-6389-4503-b1dc-9afdbd759c64\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-fns29" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.409893 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cc636dc4-0911-423b-8327-5b81d759c74a-dbus-socket\") pod \"nmstate-handler-4vz6n\" (UID: \"cc636dc4-0911-423b-8327-5b81d759c74a\") " 
pod="openshift-nmstate/nmstate-handler-4vz6n" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.410132 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4pqz\" (UniqueName: \"kubernetes.io/projected/b93f5376-bd46-4dc1-82aa-6b1db7622176-kube-api-access-m4pqz\") pod \"nmstate-metrics-7f7f7578db-pm75r\" (UID: \"b93f5376-bd46-4dc1-82aa-6b1db7622176\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-pm75r" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.410195 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cc636dc4-0911-423b-8327-5b81d759c74a-nmstate-lock\") pod \"nmstate-handler-4vz6n\" (UID: \"cc636dc4-0911-423b-8327-5b81d759c74a\") " pod="openshift-nmstate/nmstate-handler-4vz6n" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.410235 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2vvv\" (UniqueName: \"kubernetes.io/projected/cc636dc4-0911-423b-8327-5b81d759c74a-kube-api-access-x2vvv\") pod \"nmstate-handler-4vz6n\" (UID: \"cc636dc4-0911-423b-8327-5b81d759c74a\") " pod="openshift-nmstate/nmstate-handler-4vz6n" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.410282 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cc636dc4-0911-423b-8327-5b81d759c74a-ovs-socket\") pod \"nmstate-handler-4vz6n\" (UID: \"cc636dc4-0911-423b-8327-5b81d759c74a\") " pod="openshift-nmstate/nmstate-handler-4vz6n" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.511905 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6q6j\" (UniqueName: \"kubernetes.io/projected/37453676-6389-4503-b1dc-9afdbd759c64-kube-api-access-b6q6j\") pod 
\"nmstate-webhook-f8fb84555-fns29\" (UID: \"37453676-6389-4503-b1dc-9afdbd759c64\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-fns29" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.512336 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f1ea057-6f84-40ec-be2e-54583b3af99b-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-tjfnm\" (UID: \"8f1ea057-6f84-40ec-be2e-54583b3af99b\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tjfnm" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.512419 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/37453676-6389-4503-b1dc-9afdbd759c64-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-fns29\" (UID: \"37453676-6389-4503-b1dc-9afdbd759c64\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-fns29" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.512447 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cc636dc4-0911-423b-8327-5b81d759c74a-dbus-socket\") pod \"nmstate-handler-4vz6n\" (UID: \"cc636dc4-0911-423b-8327-5b81d759c74a\") " pod="openshift-nmstate/nmstate-handler-4vz6n" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.512859 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cc636dc4-0911-423b-8327-5b81d759c74a-dbus-socket\") pod \"nmstate-handler-4vz6n\" (UID: \"cc636dc4-0911-423b-8327-5b81d759c74a\") " pod="openshift-nmstate/nmstate-handler-4vz6n" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.513377 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfg56\" (UniqueName: 
\"kubernetes.io/projected/8f1ea057-6f84-40ec-be2e-54583b3af99b-kube-api-access-lfg56\") pod \"nmstate-console-plugin-6ff7998486-tjfnm\" (UID: \"8f1ea057-6f84-40ec-be2e-54583b3af99b\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tjfnm" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.513697 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4pqz\" (UniqueName: \"kubernetes.io/projected/b93f5376-bd46-4dc1-82aa-6b1db7622176-kube-api-access-m4pqz\") pod \"nmstate-metrics-7f7f7578db-pm75r\" (UID: \"b93f5376-bd46-4dc1-82aa-6b1db7622176\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-pm75r" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.513731 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cc636dc4-0911-423b-8327-5b81d759c74a-nmstate-lock\") pod \"nmstate-handler-4vz6n\" (UID: \"cc636dc4-0911-423b-8327-5b81d759c74a\") " pod="openshift-nmstate/nmstate-handler-4vz6n" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.513750 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2vvv\" (UniqueName: \"kubernetes.io/projected/cc636dc4-0911-423b-8327-5b81d759c74a-kube-api-access-x2vvv\") pod \"nmstate-handler-4vz6n\" (UID: \"cc636dc4-0911-423b-8327-5b81d759c74a\") " pod="openshift-nmstate/nmstate-handler-4vz6n" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.514044 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cc636dc4-0911-423b-8327-5b81d759c74a-ovs-socket\") pod \"nmstate-handler-4vz6n\" (UID: \"cc636dc4-0911-423b-8327-5b81d759c74a\") " pod="openshift-nmstate/nmstate-handler-4vz6n" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.513832 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/cc636dc4-0911-423b-8327-5b81d759c74a-nmstate-lock\") pod \"nmstate-handler-4vz6n\" (UID: \"cc636dc4-0911-423b-8327-5b81d759c74a\") " pod="openshift-nmstate/nmstate-handler-4vz6n" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.514125 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8f1ea057-6f84-40ec-be2e-54583b3af99b-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-tjfnm\" (UID: \"8f1ea057-6f84-40ec-be2e-54583b3af99b\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tjfnm" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.514175 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cc636dc4-0911-423b-8327-5b81d759c74a-ovs-socket\") pod \"nmstate-handler-4vz6n\" (UID: \"cc636dc4-0911-423b-8327-5b81d759c74a\") " pod="openshift-nmstate/nmstate-handler-4vz6n" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.518122 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/37453676-6389-4503-b1dc-9afdbd759c64-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-fns29\" (UID: \"37453676-6389-4503-b1dc-9afdbd759c64\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-fns29" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.523548 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-56468cbb4-6ks5l"] Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.524217 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.529574 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6q6j\" (UniqueName: \"kubernetes.io/projected/37453676-6389-4503-b1dc-9afdbd759c64-kube-api-access-b6q6j\") pod \"nmstate-webhook-f8fb84555-fns29\" (UID: \"37453676-6389-4503-b1dc-9afdbd759c64\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-fns29" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.530436 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2vvv\" (UniqueName: \"kubernetes.io/projected/cc636dc4-0911-423b-8327-5b81d759c74a-kube-api-access-x2vvv\") pod \"nmstate-handler-4vz6n\" (UID: \"cc636dc4-0911-423b-8327-5b81d759c74a\") " pod="openshift-nmstate/nmstate-handler-4vz6n" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.530985 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4pqz\" (UniqueName: \"kubernetes.io/projected/b93f5376-bd46-4dc1-82aa-6b1db7622176-kube-api-access-m4pqz\") pod \"nmstate-metrics-7f7f7578db-pm75r\" (UID: \"b93f5376-bd46-4dc1-82aa-6b1db7622176\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-pm75r" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.540075 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-56468cbb4-6ks5l"] Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.550481 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-pm75r" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.560316 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-fns29" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.578811 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-4vz6n" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.615310 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfg56\" (UniqueName: \"kubernetes.io/projected/8f1ea057-6f84-40ec-be2e-54583b3af99b-kube-api-access-lfg56\") pod \"nmstate-console-plugin-6ff7998486-tjfnm\" (UID: \"8f1ea057-6f84-40ec-be2e-54583b3af99b\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tjfnm" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.615405 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8f1ea057-6f84-40ec-be2e-54583b3af99b-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-tjfnm\" (UID: \"8f1ea057-6f84-40ec-be2e-54583b3af99b\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tjfnm" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.615439 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f1ea057-6f84-40ec-be2e-54583b3af99b-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-tjfnm\" (UID: \"8f1ea057-6f84-40ec-be2e-54583b3af99b\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tjfnm" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.616559 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8f1ea057-6f84-40ec-be2e-54583b3af99b-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-tjfnm\" (UID: \"8f1ea057-6f84-40ec-be2e-54583b3af99b\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tjfnm" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.628343 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f1ea057-6f84-40ec-be2e-54583b3af99b-plugin-serving-cert\") 
pod \"nmstate-console-plugin-6ff7998486-tjfnm\" (UID: \"8f1ea057-6f84-40ec-be2e-54583b3af99b\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tjfnm" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.632783 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfg56\" (UniqueName: \"kubernetes.io/projected/8f1ea057-6f84-40ec-be2e-54583b3af99b-kube-api-access-lfg56\") pod \"nmstate-console-plugin-6ff7998486-tjfnm\" (UID: \"8f1ea057-6f84-40ec-be2e-54583b3af99b\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tjfnm" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.667467 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tjfnm" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.725388 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-pm75r"] Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.728102 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/867bfba3-74f2-459d-b988-d5deb5356d66-console-oauth-config\") pod \"console-56468cbb4-6ks5l\" (UID: \"867bfba3-74f2-459d-b988-d5deb5356d66\") " pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.728155 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/867bfba3-74f2-459d-b988-d5deb5356d66-trusted-ca-bundle\") pod \"console-56468cbb4-6ks5l\" (UID: \"867bfba3-74f2-459d-b988-d5deb5356d66\") " pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.728221 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/867bfba3-74f2-459d-b988-d5deb5356d66-console-config\") pod \"console-56468cbb4-6ks5l\" (UID: \"867bfba3-74f2-459d-b988-d5deb5356d66\") " pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.728258 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/867bfba3-74f2-459d-b988-d5deb5356d66-oauth-serving-cert\") pod \"console-56468cbb4-6ks5l\" (UID: \"867bfba3-74f2-459d-b988-d5deb5356d66\") " pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.728440 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpv4b\" (UniqueName: \"kubernetes.io/projected/867bfba3-74f2-459d-b988-d5deb5356d66-kube-api-access-qpv4b\") pod \"console-56468cbb4-6ks5l\" (UID: \"867bfba3-74f2-459d-b988-d5deb5356d66\") " pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.728473 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/867bfba3-74f2-459d-b988-d5deb5356d66-console-serving-cert\") pod \"console-56468cbb4-6ks5l\" (UID: \"867bfba3-74f2-459d-b988-d5deb5356d66\") " pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.728495 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/867bfba3-74f2-459d-b988-d5deb5356d66-service-ca\") pod \"console-56468cbb4-6ks5l\" (UID: \"867bfba3-74f2-459d-b988-d5deb5356d66\") " pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: W1215 05:46:31.737798 4747 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb93f5376_bd46_4dc1_82aa_6b1db7622176.slice/crio-a315942b28512df69b4647e7dc91035c5c719463f41c2512557ca34449ec7bcc WatchSource:0}: Error finding container a315942b28512df69b4647e7dc91035c5c719463f41c2512557ca34449ec7bcc: Status 404 returned error can't find the container with id a315942b28512df69b4647e7dc91035c5c719463f41c2512557ca34449ec7bcc Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.747514 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-fns29"] Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.829068 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/867bfba3-74f2-459d-b988-d5deb5356d66-console-config\") pod \"console-56468cbb4-6ks5l\" (UID: \"867bfba3-74f2-459d-b988-d5deb5356d66\") " pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.829128 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/867bfba3-74f2-459d-b988-d5deb5356d66-oauth-serving-cert\") pod \"console-56468cbb4-6ks5l\" (UID: \"867bfba3-74f2-459d-b988-d5deb5356d66\") " pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.829427 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpv4b\" (UniqueName: \"kubernetes.io/projected/867bfba3-74f2-459d-b988-d5deb5356d66-kube-api-access-qpv4b\") pod \"console-56468cbb4-6ks5l\" (UID: \"867bfba3-74f2-459d-b988-d5deb5356d66\") " pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.829465 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/867bfba3-74f2-459d-b988-d5deb5356d66-console-serving-cert\") pod \"console-56468cbb4-6ks5l\" (UID: \"867bfba3-74f2-459d-b988-d5deb5356d66\") " pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.829485 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/867bfba3-74f2-459d-b988-d5deb5356d66-service-ca\") pod \"console-56468cbb4-6ks5l\" (UID: \"867bfba3-74f2-459d-b988-d5deb5356d66\") " pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.830056 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/867bfba3-74f2-459d-b988-d5deb5356d66-console-oauth-config\") pod \"console-56468cbb4-6ks5l\" (UID: \"867bfba3-74f2-459d-b988-d5deb5356d66\") " pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.830433 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/867bfba3-74f2-459d-b988-d5deb5356d66-trusted-ca-bundle\") pod \"console-56468cbb4-6ks5l\" (UID: \"867bfba3-74f2-459d-b988-d5deb5356d66\") " pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.830574 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/867bfba3-74f2-459d-b988-d5deb5356d66-console-config\") pod \"console-56468cbb4-6ks5l\" (UID: \"867bfba3-74f2-459d-b988-d5deb5356d66\") " pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.831745 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/867bfba3-74f2-459d-b988-d5deb5356d66-oauth-serving-cert\") pod \"console-56468cbb4-6ks5l\" (UID: \"867bfba3-74f2-459d-b988-d5deb5356d66\") " pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.832626 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/867bfba3-74f2-459d-b988-d5deb5356d66-trusted-ca-bundle\") pod \"console-56468cbb4-6ks5l\" (UID: \"867bfba3-74f2-459d-b988-d5deb5356d66\") " pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.835425 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/867bfba3-74f2-459d-b988-d5deb5356d66-console-oauth-config\") pod \"console-56468cbb4-6ks5l\" (UID: \"867bfba3-74f2-459d-b988-d5deb5356d66\") " pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.836012 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/867bfba3-74f2-459d-b988-d5deb5356d66-service-ca\") pod \"console-56468cbb4-6ks5l\" (UID: \"867bfba3-74f2-459d-b988-d5deb5356d66\") " pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.837827 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/867bfba3-74f2-459d-b988-d5deb5356d66-console-serving-cert\") pod \"console-56468cbb4-6ks5l\" (UID: \"867bfba3-74f2-459d-b988-d5deb5356d66\") " pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.844992 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpv4b\" (UniqueName: 
\"kubernetes.io/projected/867bfba3-74f2-459d-b988-d5deb5356d66-kube-api-access-qpv4b\") pod \"console-56468cbb4-6ks5l\" (UID: \"867bfba3-74f2-459d-b988-d5deb5356d66\") " pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:31 crc kubenswrapper[4747]: W1215 05:46:31.847026 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f1ea057_6f84_40ec_be2e_54583b3af99b.slice/crio-96fd6ce59213fbca26b811c782e5c0076a0485a3ba844b933a3d845379d83a4a WatchSource:0}: Error finding container 96fd6ce59213fbca26b811c782e5c0076a0485a3ba844b933a3d845379d83a4a: Status 404 returned error can't find the container with id 96fd6ce59213fbca26b811c782e5c0076a0485a3ba844b933a3d845379d83a4a Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.848507 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-tjfnm"] Dec 15 05:46:31 crc kubenswrapper[4747]: I1215 05:46:31.901997 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:32 crc kubenswrapper[4747]: I1215 05:46:32.069606 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-56468cbb4-6ks5l"] Dec 15 05:46:32 crc kubenswrapper[4747]: I1215 05:46:32.466494 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56468cbb4-6ks5l" event={"ID":"867bfba3-74f2-459d-b988-d5deb5356d66","Type":"ContainerStarted","Data":"a38622f058fe4598cd7f57d2092286b63ff4a4fbabb7eb769486f1ddc030adb5"} Dec 15 05:46:32 crc kubenswrapper[4747]: I1215 05:46:32.466585 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56468cbb4-6ks5l" event={"ID":"867bfba3-74f2-459d-b988-d5deb5356d66","Type":"ContainerStarted","Data":"36f3b79b5fa9ecfbbf37bdbeb0ba930ce994530d49eb816a1684f74bf9db8710"} Dec 15 05:46:32 crc kubenswrapper[4747]: I1215 05:46:32.467899 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tjfnm" event={"ID":"8f1ea057-6f84-40ec-be2e-54583b3af99b","Type":"ContainerStarted","Data":"96fd6ce59213fbca26b811c782e5c0076a0485a3ba844b933a3d845379d83a4a"} Dec 15 05:46:32 crc kubenswrapper[4747]: I1215 05:46:32.469840 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4vz6n" event={"ID":"cc636dc4-0911-423b-8327-5b81d759c74a","Type":"ContainerStarted","Data":"27137cbf0ff25b75d905a18d45660115d853507c5c743dd4e7fd1d285ec293e9"} Dec 15 05:46:32 crc kubenswrapper[4747]: I1215 05:46:32.471300 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-fns29" event={"ID":"37453676-6389-4503-b1dc-9afdbd759c64","Type":"ContainerStarted","Data":"beb9437fe8eceb9b332556969cd2a896ecb8ab0a6f6a67b50e941d103dbc5bbb"} Dec 15 05:46:32 crc kubenswrapper[4747]: I1215 05:46:32.472839 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-7f7f7578db-pm75r" event={"ID":"b93f5376-bd46-4dc1-82aa-6b1db7622176","Type":"ContainerStarted","Data":"a315942b28512df69b4647e7dc91035c5c719463f41c2512557ca34449ec7bcc"} Dec 15 05:46:32 crc kubenswrapper[4747]: I1215 05:46:32.485265 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-56468cbb4-6ks5l" podStartSLOduration=1.485245531 podStartE2EDuration="1.485245531s" podCreationTimestamp="2025-12-15 05:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:46:32.48310758 +0000 UTC m=+556.179619497" watchObservedRunningTime="2025-12-15 05:46:32.485245531 +0000 UTC m=+556.181757448" Dec 15 05:46:35 crc kubenswrapper[4747]: I1215 05:46:35.493480 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tjfnm" event={"ID":"8f1ea057-6f84-40ec-be2e-54583b3af99b","Type":"ContainerStarted","Data":"e307588f57111872042348fba9e2ce42d0f9caff7845712a18b37433b8f9e4d3"} Dec 15 05:46:35 crc kubenswrapper[4747]: I1215 05:46:35.495971 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4vz6n" event={"ID":"cc636dc4-0911-423b-8327-5b81d759c74a","Type":"ContainerStarted","Data":"75f862f5738b069e252072f549b033c9b57b89431e25dba7651000e7d22db763"} Dec 15 05:46:35 crc kubenswrapper[4747]: I1215 05:46:35.496420 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-4vz6n" Dec 15 05:46:35 crc kubenswrapper[4747]: I1215 05:46:35.498550 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-fns29" event={"ID":"37453676-6389-4503-b1dc-9afdbd759c64","Type":"ContainerStarted","Data":"51c74ed6eae13976d1894b021246ad51793f4b8db16203d895ff9fe7ad770011"} Dec 15 05:46:35 crc kubenswrapper[4747]: I1215 05:46:35.498644 
4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-fns29" Dec 15 05:46:35 crc kubenswrapper[4747]: I1215 05:46:35.500578 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-pm75r" event={"ID":"b93f5376-bd46-4dc1-82aa-6b1db7622176","Type":"ContainerStarted","Data":"76de8acd0d5adba14aac9cab2101bc4240f96ff7433d6d92fb9fa4e89313ad8b"} Dec 15 05:46:35 crc kubenswrapper[4747]: I1215 05:46:35.511834 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-tjfnm" podStartSLOduration=1.756245565 podStartE2EDuration="4.511803059s" podCreationTimestamp="2025-12-15 05:46:31 +0000 UTC" firstStartedPulling="2025-12-15 05:46:31.849993054 +0000 UTC m=+555.546504972" lastFinishedPulling="2025-12-15 05:46:34.605550538 +0000 UTC m=+558.302062466" observedRunningTime="2025-12-15 05:46:35.50738575 +0000 UTC m=+559.203897667" watchObservedRunningTime="2025-12-15 05:46:35.511803059 +0000 UTC m=+559.208314976" Dec 15 05:46:35 crc kubenswrapper[4747]: I1215 05:46:35.527077 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-fns29" podStartSLOduration=1.6915560429999998 podStartE2EDuration="4.527061119s" podCreationTimestamp="2025-12-15 05:46:31 +0000 UTC" firstStartedPulling="2025-12-15 05:46:31.774299445 +0000 UTC m=+555.470811361" lastFinishedPulling="2025-12-15 05:46:34.609804519 +0000 UTC m=+558.306316437" observedRunningTime="2025-12-15 05:46:35.525561839 +0000 UTC m=+559.222073756" watchObservedRunningTime="2025-12-15 05:46:35.527061119 +0000 UTC m=+559.223573036" Dec 15 05:46:35 crc kubenswrapper[4747]: I1215 05:46:35.538198 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-4vz6n" podStartSLOduration=1.5275157419999998 podStartE2EDuration="4.538185574s" 
podCreationTimestamp="2025-12-15 05:46:31 +0000 UTC" firstStartedPulling="2025-12-15 05:46:31.603600748 +0000 UTC m=+555.300112665" lastFinishedPulling="2025-12-15 05:46:34.614270581 +0000 UTC m=+558.310782497" observedRunningTime="2025-12-15 05:46:35.537203898 +0000 UTC m=+559.233715815" watchObservedRunningTime="2025-12-15 05:46:35.538185574 +0000 UTC m=+559.234697491" Dec 15 05:46:37 crc kubenswrapper[4747]: I1215 05:46:37.519660 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-pm75r" event={"ID":"b93f5376-bd46-4dc1-82aa-6b1db7622176","Type":"ContainerStarted","Data":"675165edb1eec7410d2f922456be74b6a70a5698e09c90050065b752ddd4d2ce"} Dec 15 05:46:37 crc kubenswrapper[4747]: I1215 05:46:37.536011 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-pm75r" podStartSLOduration=1.255004064 podStartE2EDuration="6.5359923s" podCreationTimestamp="2025-12-15 05:46:31 +0000 UTC" firstStartedPulling="2025-12-15 05:46:31.739823916 +0000 UTC m=+555.436335832" lastFinishedPulling="2025-12-15 05:46:37.020812151 +0000 UTC m=+560.717324068" observedRunningTime="2025-12-15 05:46:37.531727208 +0000 UTC m=+561.228239125" watchObservedRunningTime="2025-12-15 05:46:37.5359923 +0000 UTC m=+561.232504217" Dec 15 05:46:41 crc kubenswrapper[4747]: I1215 05:46:41.601240 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-4vz6n" Dec 15 05:46:41 crc kubenswrapper[4747]: I1215 05:46:41.902104 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:41 crc kubenswrapper[4747]: I1215 05:46:41.902166 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:41 crc kubenswrapper[4747]: I1215 05:46:41.908290 4747 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:42 crc kubenswrapper[4747]: I1215 05:46:42.554664 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-56468cbb4-6ks5l" Dec 15 05:46:42 crc kubenswrapper[4747]: I1215 05:46:42.609514 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-2sdgk"] Dec 15 05:46:51 crc kubenswrapper[4747]: I1215 05:46:51.567458 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-fns29" Dec 15 05:46:58 crc kubenswrapper[4747]: I1215 05:46:58.865168 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 05:46:58 crc kubenswrapper[4747]: I1215 05:46:58.865878 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 05:47:02 crc kubenswrapper[4747]: I1215 05:47:02.977547 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f"] Dec 15 05:47:02 crc kubenswrapper[4747]: I1215 05:47:02.979781 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f" Dec 15 05:47:02 crc kubenswrapper[4747]: I1215 05:47:02.982701 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 15 05:47:02 crc kubenswrapper[4747]: I1215 05:47:02.992236 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f"] Dec 15 05:47:03 crc kubenswrapper[4747]: I1215 05:47:03.116206 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5049975b-44d3-44ef-98d8-94691dcb042f-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f\" (UID: \"5049975b-44d3-44ef-98d8-94691dcb042f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f" Dec 15 05:47:03 crc kubenswrapper[4747]: I1215 05:47:03.116263 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbcsh\" (UniqueName: \"kubernetes.io/projected/5049975b-44d3-44ef-98d8-94691dcb042f-kube-api-access-vbcsh\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f\" (UID: \"5049975b-44d3-44ef-98d8-94691dcb042f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f" Dec 15 05:47:03 crc kubenswrapper[4747]: I1215 05:47:03.116304 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5049975b-44d3-44ef-98d8-94691dcb042f-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f\" (UID: \"5049975b-44d3-44ef-98d8-94691dcb042f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f" Dec 15 05:47:03 crc kubenswrapper[4747]: 
I1215 05:47:03.217273 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5049975b-44d3-44ef-98d8-94691dcb042f-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f\" (UID: \"5049975b-44d3-44ef-98d8-94691dcb042f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f" Dec 15 05:47:03 crc kubenswrapper[4747]: I1215 05:47:03.217342 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbcsh\" (UniqueName: \"kubernetes.io/projected/5049975b-44d3-44ef-98d8-94691dcb042f-kube-api-access-vbcsh\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f\" (UID: \"5049975b-44d3-44ef-98d8-94691dcb042f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f" Dec 15 05:47:03 crc kubenswrapper[4747]: I1215 05:47:03.217384 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5049975b-44d3-44ef-98d8-94691dcb042f-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f\" (UID: \"5049975b-44d3-44ef-98d8-94691dcb042f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f" Dec 15 05:47:03 crc kubenswrapper[4747]: I1215 05:47:03.217808 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5049975b-44d3-44ef-98d8-94691dcb042f-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f\" (UID: \"5049975b-44d3-44ef-98d8-94691dcb042f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f" Dec 15 05:47:03 crc kubenswrapper[4747]: I1215 05:47:03.218469 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/5049975b-44d3-44ef-98d8-94691dcb042f-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f\" (UID: \"5049975b-44d3-44ef-98d8-94691dcb042f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f" Dec 15 05:47:03 crc kubenswrapper[4747]: I1215 05:47:03.232442 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbcsh\" (UniqueName: \"kubernetes.io/projected/5049975b-44d3-44ef-98d8-94691dcb042f-kube-api-access-vbcsh\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f\" (UID: \"5049975b-44d3-44ef-98d8-94691dcb042f\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f" Dec 15 05:47:03 crc kubenswrapper[4747]: I1215 05:47:03.295413 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f" Dec 15 05:47:03 crc kubenswrapper[4747]: I1215 05:47:03.665317 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f"] Dec 15 05:47:04 crc kubenswrapper[4747]: I1215 05:47:04.684577 4747 generic.go:334] "Generic (PLEG): container finished" podID="5049975b-44d3-44ef-98d8-94691dcb042f" containerID="2c873e19bf70c2a868d7e8b50645097b309e4e716d4e6e5f191650ca2ab14929" exitCode=0 Dec 15 05:47:04 crc kubenswrapper[4747]: I1215 05:47:04.684769 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f" event={"ID":"5049975b-44d3-44ef-98d8-94691dcb042f","Type":"ContainerDied","Data":"2c873e19bf70c2a868d7e8b50645097b309e4e716d4e6e5f191650ca2ab14929"} Dec 15 05:47:04 crc kubenswrapper[4747]: I1215 05:47:04.684888 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f" event={"ID":"5049975b-44d3-44ef-98d8-94691dcb042f","Type":"ContainerStarted","Data":"842958d31d22a0c12d06c7d8b70dd6b5456da828b505165ea24c62d2b84dda63"} Dec 15 05:47:06 crc kubenswrapper[4747]: I1215 05:47:06.697509 4747 generic.go:334] "Generic (PLEG): container finished" podID="5049975b-44d3-44ef-98d8-94691dcb042f" containerID="bc8b8afd90eed6a138aca1549d6b7e40820f9ad94c9c3e7d80cae9e4c4e9ad97" exitCode=0 Dec 15 05:47:06 crc kubenswrapper[4747]: I1215 05:47:06.697641 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f" event={"ID":"5049975b-44d3-44ef-98d8-94691dcb042f","Type":"ContainerDied","Data":"bc8b8afd90eed6a138aca1549d6b7e40820f9ad94c9c3e7d80cae9e4c4e9ad97"} Dec 15 05:47:07 crc kubenswrapper[4747]: I1215 05:47:07.640496 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-2sdgk" podUID="17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f" containerName="console" containerID="cri-o://9fe68dd80b9e33d6afbdccb3b8f3ff4ae50a93efc8283dcd2035d69533a62c33" gracePeriod=15 Dec 15 05:47:07 crc kubenswrapper[4747]: I1215 05:47:07.706118 4747 generic.go:334] "Generic (PLEG): container finished" podID="5049975b-44d3-44ef-98d8-94691dcb042f" containerID="1beea77bf3f722946f91f94fa5941d5dff2e1e8783e106d5713178a9858b079d" exitCode=0 Dec 15 05:47:07 crc kubenswrapper[4747]: I1215 05:47:07.706179 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f" event={"ID":"5049975b-44d3-44ef-98d8-94691dcb042f","Type":"ContainerDied","Data":"1beea77bf3f722946f91f94fa5941d5dff2e1e8783e106d5713178a9858b079d"} Dec 15 05:47:07 crc kubenswrapper[4747]: I1215 05:47:07.961816 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-2sdgk_17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f/console/0.log" Dec 15 05:47:07 crc kubenswrapper[4747]: I1215 05:47:07.962178 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.077629 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-oauth-serving-cert\") pod \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.077691 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-console-oauth-config\") pod \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.077767 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qrfq\" (UniqueName: \"kubernetes.io/projected/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-kube-api-access-9qrfq\") pod \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.077798 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-trusted-ca-bundle\") pod \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.077821 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-service-ca\") pod \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.077851 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-console-serving-cert\") pod \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.077880 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-console-config\") pod \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\" (UID: \"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f\") " Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.079036 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-console-config" (OuterVolumeSpecName: "console-config") pod "17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f" (UID: "17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.079054 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f" (UID: "17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.079097 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f" (UID: "17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.079325 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-service-ca" (OuterVolumeSpecName: "service-ca") pod "17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f" (UID: "17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.084640 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f" (UID: "17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.084789 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f" (UID: "17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.085232 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-kube-api-access-9qrfq" (OuterVolumeSpecName: "kube-api-access-9qrfq") pod "17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f" (UID: "17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f"). InnerVolumeSpecName "kube-api-access-9qrfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.179496 4747 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.179530 4747 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.179540 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qrfq\" (UniqueName: \"kubernetes.io/projected/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-kube-api-access-9qrfq\") on node \"crc\" DevicePath \"\"" Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.179552 4747 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.179564 4747 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-service-ca\") on node \"crc\" DevicePath \"\"" Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.179581 4747 reconciler_common.go:293] "Volume detached for 
volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.179589 4747 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f-console-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.712712 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2sdgk_17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f/console/0.log" Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.712767 4747 generic.go:334] "Generic (PLEG): container finished" podID="17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f" containerID="9fe68dd80b9e33d6afbdccb3b8f3ff4ae50a93efc8283dcd2035d69533a62c33" exitCode=2 Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.712822 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2sdgk" Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.712854 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2sdgk" event={"ID":"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f","Type":"ContainerDied","Data":"9fe68dd80b9e33d6afbdccb3b8f3ff4ae50a93efc8283dcd2035d69533a62c33"} Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.712882 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2sdgk" event={"ID":"17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f","Type":"ContainerDied","Data":"d65f1f8d541e97790a8eb2f4b7f75e5e8159a795c1d56f5e6e2202b757c02f6b"} Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.712905 4747 scope.go:117] "RemoveContainer" containerID="9fe68dd80b9e33d6afbdccb3b8f3ff4ae50a93efc8283dcd2035d69533a62c33" Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.728282 4747 scope.go:117] "RemoveContainer" containerID="9fe68dd80b9e33d6afbdccb3b8f3ff4ae50a93efc8283dcd2035d69533a62c33" Dec 15 05:47:08 crc kubenswrapper[4747]: E1215 05:47:08.728673 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fe68dd80b9e33d6afbdccb3b8f3ff4ae50a93efc8283dcd2035d69533a62c33\": container with ID starting with 9fe68dd80b9e33d6afbdccb3b8f3ff4ae50a93efc8283dcd2035d69533a62c33 not found: ID does not exist" containerID="9fe68dd80b9e33d6afbdccb3b8f3ff4ae50a93efc8283dcd2035d69533a62c33" Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.728704 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fe68dd80b9e33d6afbdccb3b8f3ff4ae50a93efc8283dcd2035d69533a62c33"} err="failed to get container status \"9fe68dd80b9e33d6afbdccb3b8f3ff4ae50a93efc8283dcd2035d69533a62c33\": rpc error: code = NotFound desc = could not find container \"9fe68dd80b9e33d6afbdccb3b8f3ff4ae50a93efc8283dcd2035d69533a62c33\": 
container with ID starting with 9fe68dd80b9e33d6afbdccb3b8f3ff4ae50a93efc8283dcd2035d69533a62c33 not found: ID does not exist" Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.729711 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-2sdgk"] Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.739798 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-2sdgk"] Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.899268 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f" Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.989550 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbcsh\" (UniqueName: \"kubernetes.io/projected/5049975b-44d3-44ef-98d8-94691dcb042f-kube-api-access-vbcsh\") pod \"5049975b-44d3-44ef-98d8-94691dcb042f\" (UID: \"5049975b-44d3-44ef-98d8-94691dcb042f\") " Dec 15 05:47:08 crc kubenswrapper[4747]: I1215 05:47:08.993810 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5049975b-44d3-44ef-98d8-94691dcb042f-kube-api-access-vbcsh" (OuterVolumeSpecName: "kube-api-access-vbcsh") pod "5049975b-44d3-44ef-98d8-94691dcb042f" (UID: "5049975b-44d3-44ef-98d8-94691dcb042f"). InnerVolumeSpecName "kube-api-access-vbcsh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:47:09 crc kubenswrapper[4747]: I1215 05:47:09.090385 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5049975b-44d3-44ef-98d8-94691dcb042f-util\") pod \"5049975b-44d3-44ef-98d8-94691dcb042f\" (UID: \"5049975b-44d3-44ef-98d8-94691dcb042f\") " Dec 15 05:47:09 crc kubenswrapper[4747]: I1215 05:47:09.090507 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5049975b-44d3-44ef-98d8-94691dcb042f-bundle\") pod \"5049975b-44d3-44ef-98d8-94691dcb042f\" (UID: \"5049975b-44d3-44ef-98d8-94691dcb042f\") " Dec 15 05:47:09 crc kubenswrapper[4747]: I1215 05:47:09.090833 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbcsh\" (UniqueName: \"kubernetes.io/projected/5049975b-44d3-44ef-98d8-94691dcb042f-kube-api-access-vbcsh\") on node \"crc\" DevicePath \"\"" Dec 15 05:47:09 crc kubenswrapper[4747]: I1215 05:47:09.091242 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5049975b-44d3-44ef-98d8-94691dcb042f-bundle" (OuterVolumeSpecName: "bundle") pod "5049975b-44d3-44ef-98d8-94691dcb042f" (UID: "5049975b-44d3-44ef-98d8-94691dcb042f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:47:09 crc kubenswrapper[4747]: I1215 05:47:09.140450 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5049975b-44d3-44ef-98d8-94691dcb042f-util" (OuterVolumeSpecName: "util") pod "5049975b-44d3-44ef-98d8-94691dcb042f" (UID: "5049975b-44d3-44ef-98d8-94691dcb042f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:47:09 crc kubenswrapper[4747]: I1215 05:47:09.191886 4747 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5049975b-44d3-44ef-98d8-94691dcb042f-util\") on node \"crc\" DevicePath \"\"" Dec 15 05:47:09 crc kubenswrapper[4747]: I1215 05:47:09.191912 4747 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5049975b-44d3-44ef-98d8-94691dcb042f-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:47:09 crc kubenswrapper[4747]: I1215 05:47:09.719149 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f" event={"ID":"5049975b-44d3-44ef-98d8-94691dcb042f","Type":"ContainerDied","Data":"842958d31d22a0c12d06c7d8b70dd6b5456da828b505165ea24c62d2b84dda63"} Dec 15 05:47:09 crc kubenswrapper[4747]: I1215 05:47:09.719462 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="842958d31d22a0c12d06c7d8b70dd6b5456da828b505165ea24c62d2b84dda63" Dec 15 05:47:09 crc kubenswrapper[4747]: I1215 05:47:09.719389 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f" Dec 15 05:47:10 crc kubenswrapper[4747]: I1215 05:47:10.635073 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f" path="/var/lib/kubelet/pods/17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f/volumes" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.816063 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-75d987bf4c-9srrp"] Dec 15 05:47:19 crc kubenswrapper[4747]: E1215 05:47:19.816786 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5049975b-44d3-44ef-98d8-94691dcb042f" containerName="extract" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.816800 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="5049975b-44d3-44ef-98d8-94691dcb042f" containerName="extract" Dec 15 05:47:19 crc kubenswrapper[4747]: E1215 05:47:19.816807 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5049975b-44d3-44ef-98d8-94691dcb042f" containerName="util" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.816813 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="5049975b-44d3-44ef-98d8-94691dcb042f" containerName="util" Dec 15 05:47:19 crc kubenswrapper[4747]: E1215 05:47:19.816826 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5049975b-44d3-44ef-98d8-94691dcb042f" containerName="pull" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.816831 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="5049975b-44d3-44ef-98d8-94691dcb042f" containerName="pull" Dec 15 05:47:19 crc kubenswrapper[4747]: E1215 05:47:19.816837 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f" containerName="console" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.816844 4747 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f" containerName="console" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.816945 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="5049975b-44d3-44ef-98d8-94691dcb042f" containerName="extract" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.816960 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="17ba2834-a3a7-4dfa-8aa7-b7581ef5d01f" containerName="console" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.817355 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-75d987bf4c-9srrp" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.818986 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.819973 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-m9258" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.820293 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.821374 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.822538 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/adb7b699-78a1-41ed-a24f-2c57a128568e-webhook-cert\") pod \"metallb-operator-controller-manager-75d987bf4c-9srrp\" (UID: \"adb7b699-78a1-41ed-a24f-2c57a128568e\") " pod="metallb-system/metallb-operator-controller-manager-75d987bf4c-9srrp" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.822718 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/adb7b699-78a1-41ed-a24f-2c57a128568e-apiservice-cert\") pod \"metallb-operator-controller-manager-75d987bf4c-9srrp\" (UID: \"adb7b699-78a1-41ed-a24f-2c57a128568e\") " pod="metallb-system/metallb-operator-controller-manager-75d987bf4c-9srrp" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.822832 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gng5g\" (UniqueName: \"kubernetes.io/projected/adb7b699-78a1-41ed-a24f-2c57a128568e-kube-api-access-gng5g\") pod \"metallb-operator-controller-manager-75d987bf4c-9srrp\" (UID: \"adb7b699-78a1-41ed-a24f-2c57a128568e\") " pod="metallb-system/metallb-operator-controller-manager-75d987bf4c-9srrp" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.823060 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.826319 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-75d987bf4c-9srrp"] Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.923945 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/adb7b699-78a1-41ed-a24f-2c57a128568e-webhook-cert\") pod \"metallb-operator-controller-manager-75d987bf4c-9srrp\" (UID: \"adb7b699-78a1-41ed-a24f-2c57a128568e\") " pod="metallb-system/metallb-operator-controller-manager-75d987bf4c-9srrp" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.924127 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/adb7b699-78a1-41ed-a24f-2c57a128568e-apiservice-cert\") pod \"metallb-operator-controller-manager-75d987bf4c-9srrp\" (UID: \"adb7b699-78a1-41ed-a24f-2c57a128568e\") " 
pod="metallb-system/metallb-operator-controller-manager-75d987bf4c-9srrp" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.924213 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gng5g\" (UniqueName: \"kubernetes.io/projected/adb7b699-78a1-41ed-a24f-2c57a128568e-kube-api-access-gng5g\") pod \"metallb-operator-controller-manager-75d987bf4c-9srrp\" (UID: \"adb7b699-78a1-41ed-a24f-2c57a128568e\") " pod="metallb-system/metallb-operator-controller-manager-75d987bf4c-9srrp" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.931394 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/adb7b699-78a1-41ed-a24f-2c57a128568e-apiservice-cert\") pod \"metallb-operator-controller-manager-75d987bf4c-9srrp\" (UID: \"adb7b699-78a1-41ed-a24f-2c57a128568e\") " pod="metallb-system/metallb-operator-controller-manager-75d987bf4c-9srrp" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.937622 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/adb7b699-78a1-41ed-a24f-2c57a128568e-webhook-cert\") pod \"metallb-operator-controller-manager-75d987bf4c-9srrp\" (UID: \"adb7b699-78a1-41ed-a24f-2c57a128568e\") " pod="metallb-system/metallb-operator-controller-manager-75d987bf4c-9srrp" Dec 15 05:47:19 crc kubenswrapper[4747]: I1215 05:47:19.938143 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gng5g\" (UniqueName: \"kubernetes.io/projected/adb7b699-78a1-41ed-a24f-2c57a128568e-kube-api-access-gng5g\") pod \"metallb-operator-controller-manager-75d987bf4c-9srrp\" (UID: \"adb7b699-78a1-41ed-a24f-2c57a128568e\") " pod="metallb-system/metallb-operator-controller-manager-75d987bf4c-9srrp" Dec 15 05:47:20 crc kubenswrapper[4747]: I1215 05:47:20.132135 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-75d987bf4c-9srrp" Dec 15 05:47:20 crc kubenswrapper[4747]: I1215 05:47:20.191813 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5c5c75c884-t8trz"] Dec 15 05:47:20 crc kubenswrapper[4747]: I1215 05:47:20.192688 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5c5c75c884-t8trz" Dec 15 05:47:20 crc kubenswrapper[4747]: I1215 05:47:20.199272 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 15 05:47:20 crc kubenswrapper[4747]: I1215 05:47:20.199420 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-pmrf5" Dec 15 05:47:20 crc kubenswrapper[4747]: I1215 05:47:20.199758 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 15 05:47:20 crc kubenswrapper[4747]: I1215 05:47:20.221549 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5c5c75c884-t8trz"] Dec 15 05:47:20 crc kubenswrapper[4747]: I1215 05:47:20.228470 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a13fc24-266f-433f-bbff-0cd3d1fc29fc-webhook-cert\") pod \"metallb-operator-webhook-server-5c5c75c884-t8trz\" (UID: \"3a13fc24-266f-433f-bbff-0cd3d1fc29fc\") " pod="metallb-system/metallb-operator-webhook-server-5c5c75c884-t8trz" Dec 15 05:47:20 crc kubenswrapper[4747]: I1215 05:47:20.228587 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a13fc24-266f-433f-bbff-0cd3d1fc29fc-apiservice-cert\") pod 
\"metallb-operator-webhook-server-5c5c75c884-t8trz\" (UID: \"3a13fc24-266f-433f-bbff-0cd3d1fc29fc\") " pod="metallb-system/metallb-operator-webhook-server-5c5c75c884-t8trz" Dec 15 05:47:20 crc kubenswrapper[4747]: I1215 05:47:20.228665 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-592kq\" (UniqueName: \"kubernetes.io/projected/3a13fc24-266f-433f-bbff-0cd3d1fc29fc-kube-api-access-592kq\") pod \"metallb-operator-webhook-server-5c5c75c884-t8trz\" (UID: \"3a13fc24-266f-433f-bbff-0cd3d1fc29fc\") " pod="metallb-system/metallb-operator-webhook-server-5c5c75c884-t8trz" Dec 15 05:47:20 crc kubenswrapper[4747]: I1215 05:47:20.330919 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a13fc24-266f-433f-bbff-0cd3d1fc29fc-webhook-cert\") pod \"metallb-operator-webhook-server-5c5c75c884-t8trz\" (UID: \"3a13fc24-266f-433f-bbff-0cd3d1fc29fc\") " pod="metallb-system/metallb-operator-webhook-server-5c5c75c884-t8trz" Dec 15 05:47:20 crc kubenswrapper[4747]: I1215 05:47:20.331395 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a13fc24-266f-433f-bbff-0cd3d1fc29fc-apiservice-cert\") pod \"metallb-operator-webhook-server-5c5c75c884-t8trz\" (UID: \"3a13fc24-266f-433f-bbff-0cd3d1fc29fc\") " pod="metallb-system/metallb-operator-webhook-server-5c5c75c884-t8trz" Dec 15 05:47:20 crc kubenswrapper[4747]: I1215 05:47:20.331447 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-592kq\" (UniqueName: \"kubernetes.io/projected/3a13fc24-266f-433f-bbff-0cd3d1fc29fc-kube-api-access-592kq\") pod \"metallb-operator-webhook-server-5c5c75c884-t8trz\" (UID: \"3a13fc24-266f-433f-bbff-0cd3d1fc29fc\") " pod="metallb-system/metallb-operator-webhook-server-5c5c75c884-t8trz" Dec 15 05:47:20 crc kubenswrapper[4747]: I1215 
05:47:20.345354 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a13fc24-266f-433f-bbff-0cd3d1fc29fc-apiservice-cert\") pod \"metallb-operator-webhook-server-5c5c75c884-t8trz\" (UID: \"3a13fc24-266f-433f-bbff-0cd3d1fc29fc\") " pod="metallb-system/metallb-operator-webhook-server-5c5c75c884-t8trz" Dec 15 05:47:20 crc kubenswrapper[4747]: I1215 05:47:20.350449 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-592kq\" (UniqueName: \"kubernetes.io/projected/3a13fc24-266f-433f-bbff-0cd3d1fc29fc-kube-api-access-592kq\") pod \"metallb-operator-webhook-server-5c5c75c884-t8trz\" (UID: \"3a13fc24-266f-433f-bbff-0cd3d1fc29fc\") " pod="metallb-system/metallb-operator-webhook-server-5c5c75c884-t8trz" Dec 15 05:47:20 crc kubenswrapper[4747]: I1215 05:47:20.352368 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a13fc24-266f-433f-bbff-0cd3d1fc29fc-webhook-cert\") pod \"metallb-operator-webhook-server-5c5c75c884-t8trz\" (UID: \"3a13fc24-266f-433f-bbff-0cd3d1fc29fc\") " pod="metallb-system/metallb-operator-webhook-server-5c5c75c884-t8trz" Dec 15 05:47:20 crc kubenswrapper[4747]: I1215 05:47:20.512242 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5c5c75c884-t8trz" Dec 15 05:47:20 crc kubenswrapper[4747]: I1215 05:47:20.548167 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-75d987bf4c-9srrp"] Dec 15 05:47:20 crc kubenswrapper[4747]: W1215 05:47:20.553488 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadb7b699_78a1_41ed_a24f_2c57a128568e.slice/crio-18e66403a0e62b7eeb9fc1000717d3e49961a299692cb6575617898142868ea7 WatchSource:0}: Error finding container 18e66403a0e62b7eeb9fc1000717d3e49961a299692cb6575617898142868ea7: Status 404 returned error can't find the container with id 18e66403a0e62b7eeb9fc1000717d3e49961a299692cb6575617898142868ea7 Dec 15 05:47:20 crc kubenswrapper[4747]: I1215 05:47:20.692264 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5c5c75c884-t8trz"] Dec 15 05:47:20 crc kubenswrapper[4747]: W1215 05:47:20.696159 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a13fc24_266f_433f_bbff_0cd3d1fc29fc.slice/crio-142ddedd4a46eb6076702e35893fd3f87166109c955b5c775123b25892e9e8c1 WatchSource:0}: Error finding container 142ddedd4a46eb6076702e35893fd3f87166109c955b5c775123b25892e9e8c1: Status 404 returned error can't find the container with id 142ddedd4a46eb6076702e35893fd3f87166109c955b5c775123b25892e9e8c1 Dec 15 05:47:20 crc kubenswrapper[4747]: I1215 05:47:20.780381 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-75d987bf4c-9srrp" event={"ID":"adb7b699-78a1-41ed-a24f-2c57a128568e","Type":"ContainerStarted","Data":"18e66403a0e62b7eeb9fc1000717d3e49961a299692cb6575617898142868ea7"} Dec 15 05:47:20 crc kubenswrapper[4747]: I1215 05:47:20.781519 4747 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5c5c75c884-t8trz" event={"ID":"3a13fc24-266f-433f-bbff-0cd3d1fc29fc","Type":"ContainerStarted","Data":"142ddedd4a46eb6076702e35893fd3f87166109c955b5c775123b25892e9e8c1"} Dec 15 05:47:25 crc kubenswrapper[4747]: I1215 05:47:25.809364 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-75d987bf4c-9srrp" event={"ID":"adb7b699-78a1-41ed-a24f-2c57a128568e","Type":"ContainerStarted","Data":"0c78e643bd50a21a5cd10d0747b7792da2b8c665e4364130a1eb21405b746bc8"} Dec 15 05:47:25 crc kubenswrapper[4747]: I1215 05:47:25.810335 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-75d987bf4c-9srrp" Dec 15 05:47:25 crc kubenswrapper[4747]: I1215 05:47:25.811510 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5c5c75c884-t8trz" event={"ID":"3a13fc24-266f-433f-bbff-0cd3d1fc29fc","Type":"ContainerStarted","Data":"32d1cd38e4fce8ec26f4977b8e07cbe8fefabc80f455620d66e18cd45712851b"} Dec 15 05:47:25 crc kubenswrapper[4747]: I1215 05:47:25.811715 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5c5c75c884-t8trz" Dec 15 05:47:25 crc kubenswrapper[4747]: I1215 05:47:25.829793 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-75d987bf4c-9srrp" podStartSLOduration=2.567640608 podStartE2EDuration="6.829770642s" podCreationTimestamp="2025-12-15 05:47:19 +0000 UTC" firstStartedPulling="2025-12-15 05:47:20.556550837 +0000 UTC m=+604.253062764" lastFinishedPulling="2025-12-15 05:47:24.818680881 +0000 UTC m=+608.515192798" observedRunningTime="2025-12-15 05:47:25.827524308 +0000 UTC m=+609.524036224" watchObservedRunningTime="2025-12-15 05:47:25.829770642 +0000 UTC m=+609.526282559" Dec 15 
05:47:25 crc kubenswrapper[4747]: I1215 05:47:25.852077 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5c5c75c884-t8trz" podStartSLOduration=1.7184087030000001 podStartE2EDuration="5.852056556s" podCreationTimestamp="2025-12-15 05:47:20 +0000 UTC" firstStartedPulling="2025-12-15 05:47:20.699147413 +0000 UTC m=+604.395659331" lastFinishedPulling="2025-12-15 05:47:24.832795266 +0000 UTC m=+608.529307184" observedRunningTime="2025-12-15 05:47:25.851716547 +0000 UTC m=+609.548228464" watchObservedRunningTime="2025-12-15 05:47:25.852056556 +0000 UTC m=+609.548568473" Dec 15 05:47:28 crc kubenswrapper[4747]: I1215 05:47:28.865759 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 05:47:28 crc kubenswrapper[4747]: I1215 05:47:28.866136 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 05:47:40 crc kubenswrapper[4747]: I1215 05:47:40.519833 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5c5c75c884-t8trz" Dec 15 05:47:58 crc kubenswrapper[4747]: I1215 05:47:58.864975 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 05:47:58 crc kubenswrapper[4747]: 
I1215 05:47:58.865668 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 05:47:58 crc kubenswrapper[4747]: I1215 05:47:58.865744 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 05:47:58 crc kubenswrapper[4747]: I1215 05:47:58.866333 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a17943ccf4995eb4ff240ba732355ee9e9020e929a2275df58776bf83d66a3b3"} pod="openshift-machine-config-operator/machine-config-daemon-nldtn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 15 05:47:58 crc kubenswrapper[4747]: I1215 05:47:58.866402 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" containerID="cri-o://a17943ccf4995eb4ff240ba732355ee9e9020e929a2275df58776bf83d66a3b3" gracePeriod=600 Dec 15 05:47:59 crc kubenswrapper[4747]: I1215 05:47:59.038474 4747 generic.go:334] "Generic (PLEG): container finished" podID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerID="a17943ccf4995eb4ff240ba732355ee9e9020e929a2275df58776bf83d66a3b3" exitCode=0 Dec 15 05:47:59 crc kubenswrapper[4747]: I1215 05:47:59.038624 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerDied","Data":"a17943ccf4995eb4ff240ba732355ee9e9020e929a2275df58776bf83d66a3b3"} Dec 15 05:47:59 crc 
kubenswrapper[4747]: I1215 05:47:59.038664 4747 scope.go:117] "RemoveContainer" containerID="69403043616ef8b443997fe2ec8a367f1ef1de28024e4cb945e644c4878527e7" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.047307 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerStarted","Data":"1f6be68cbfc9d5eee88cda586fa59c68181c75ecba41c64c7ee60c7ad6d664b8"} Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.135902 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-75d987bf4c-9srrp" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.703190 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-d98xw"] Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.705282 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.708149 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.708418 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-c2wmm" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.708443 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.710449 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-szbwq"] Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.710978 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-szbwq" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.717689 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.722824 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-szbwq"] Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.764982 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/cf41786d-c244-4754-ba59-4a9b6c834f9f-frr-sockets\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.765099 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b46267f-c728-4995-9817-87b793f77a58-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-szbwq\" (UID: \"7b46267f-c728-4995-9817-87b793f77a58\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-szbwq" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.765130 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/cf41786d-c244-4754-ba59-4a9b6c834f9f-metrics\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.765149 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdb8m\" (UniqueName: \"kubernetes.io/projected/cf41786d-c244-4754-ba59-4a9b6c834f9f-kube-api-access-tdb8m\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 
15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.765170 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/cf41786d-c244-4754-ba59-4a9b6c834f9f-frr-conf\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.765186 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/cf41786d-c244-4754-ba59-4a9b6c834f9f-frr-startup\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.765205 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rqs4\" (UniqueName: \"kubernetes.io/projected/7b46267f-c728-4995-9817-87b793f77a58-kube-api-access-9rqs4\") pod \"frr-k8s-webhook-server-7784b6fcf-szbwq\" (UID: \"7b46267f-c728-4995-9817-87b793f77a58\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-szbwq" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.765253 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/cf41786d-c244-4754-ba59-4a9b6c834f9f-reloader\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.765271 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf41786d-c244-4754-ba59-4a9b6c834f9f-metrics-certs\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 
05:48:00.772690 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-rtnx5"] Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.773623 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-rtnx5" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.777233 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.777406 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.777591 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-2cr79" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.777757 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.793072 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-84gxw"] Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.794272 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-84gxw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.801228 4747 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.803496 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-84gxw"] Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.866510 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fd21575-3653-416d-a59a-d2802bc9bf09-cert\") pod \"controller-5bddd4b946-84gxw\" (UID: \"2fd21575-3653-416d-a59a-d2802bc9bf09\") " pod="metallb-system/controller-5bddd4b946-84gxw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.866591 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/cf41786d-c244-4754-ba59-4a9b6c834f9f-frr-sockets\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.866705 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b943df0c-29b6-42f3-884b-707aaf02c5d0-metallb-excludel2\") pod \"speaker-rtnx5\" (UID: \"b943df0c-29b6-42f3-884b-707aaf02c5d0\") " pod="metallb-system/speaker-rtnx5" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.866733 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b943df0c-29b6-42f3-884b-707aaf02c5d0-memberlist\") pod \"speaker-rtnx5\" (UID: \"b943df0c-29b6-42f3-884b-707aaf02c5d0\") " pod="metallb-system/speaker-rtnx5" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.866760 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z54v\" (UniqueName: \"kubernetes.io/projected/b943df0c-29b6-42f3-884b-707aaf02c5d0-kube-api-access-9z54v\") pod \"speaker-rtnx5\" (UID: \"b943df0c-29b6-42f3-884b-707aaf02c5d0\") " pod="metallb-system/speaker-rtnx5" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.866844 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b943df0c-29b6-42f3-884b-707aaf02c5d0-metrics-certs\") pod \"speaker-rtnx5\" (UID: \"b943df0c-29b6-42f3-884b-707aaf02c5d0\") " pod="metallb-system/speaker-rtnx5" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.866960 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f85x\" (UniqueName: \"kubernetes.io/projected/2fd21575-3653-416d-a59a-d2802bc9bf09-kube-api-access-2f85x\") pod \"controller-5bddd4b946-84gxw\" (UID: \"2fd21575-3653-416d-a59a-d2802bc9bf09\") " pod="metallb-system/controller-5bddd4b946-84gxw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.866991 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/cf41786d-c244-4754-ba59-4a9b6c834f9f-frr-sockets\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.867058 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/cf41786d-c244-4754-ba59-4a9b6c834f9f-metrics\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.867102 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tdb8m\" (UniqueName: \"kubernetes.io/projected/cf41786d-c244-4754-ba59-4a9b6c834f9f-kube-api-access-tdb8m\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.867120 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b46267f-c728-4995-9817-87b793f77a58-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-szbwq\" (UID: \"7b46267f-c728-4995-9817-87b793f77a58\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-szbwq" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.867276 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/cf41786d-c244-4754-ba59-4a9b6c834f9f-metrics\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:00 crc kubenswrapper[4747]: E1215 05:48:00.867297 4747 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Dec 15 05:48:00 crc kubenswrapper[4747]: E1215 05:48:00.867381 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b46267f-c728-4995-9817-87b793f77a58-cert podName:7b46267f-c728-4995-9817-87b793f77a58 nodeName:}" failed. No retries permitted until 2025-12-15 05:48:01.367360944 +0000 UTC m=+645.063872861 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b46267f-c728-4995-9817-87b793f77a58-cert") pod "frr-k8s-webhook-server-7784b6fcf-szbwq" (UID: "7b46267f-c728-4995-9817-87b793f77a58") : secret "frr-k8s-webhook-server-cert" not found Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.867514 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/cf41786d-c244-4754-ba59-4a9b6c834f9f-frr-conf\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.867800 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/cf41786d-c244-4754-ba59-4a9b6c834f9f-frr-conf\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.867833 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/cf41786d-c244-4754-ba59-4a9b6c834f9f-frr-startup\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.867851 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rqs4\" (UniqueName: \"kubernetes.io/projected/7b46267f-c728-4995-9817-87b793f77a58-kube-api-access-9rqs4\") pod \"frr-k8s-webhook-server-7784b6fcf-szbwq\" (UID: \"7b46267f-c728-4995-9817-87b793f77a58\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-szbwq" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.867896 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/2fd21575-3653-416d-a59a-d2802bc9bf09-metrics-certs\") pod \"controller-5bddd4b946-84gxw\" (UID: \"2fd21575-3653-416d-a59a-d2802bc9bf09\") " pod="metallb-system/controller-5bddd4b946-84gxw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.867918 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/cf41786d-c244-4754-ba59-4a9b6c834f9f-reloader\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.867952 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf41786d-c244-4754-ba59-4a9b6c834f9f-metrics-certs\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:00 crc kubenswrapper[4747]: E1215 05:48:00.868296 4747 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Dec 15 05:48:00 crc kubenswrapper[4747]: E1215 05:48:00.868350 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf41786d-c244-4754-ba59-4a9b6c834f9f-metrics-certs podName:cf41786d-c244-4754-ba59-4a9b6c834f9f nodeName:}" failed. No retries permitted until 2025-12-15 05:48:01.368330557 +0000 UTC m=+645.064842474 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf41786d-c244-4754-ba59-4a9b6c834f9f-metrics-certs") pod "frr-k8s-d98xw" (UID: "cf41786d-c244-4754-ba59-4a9b6c834f9f") : secret "frr-k8s-certs-secret" not found Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.869074 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/cf41786d-c244-4754-ba59-4a9b6c834f9f-frr-startup\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.869274 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/cf41786d-c244-4754-ba59-4a9b6c834f9f-reloader\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.892789 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdb8m\" (UniqueName: \"kubernetes.io/projected/cf41786d-c244-4754-ba59-4a9b6c834f9f-kube-api-access-tdb8m\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.892786 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rqs4\" (UniqueName: \"kubernetes.io/projected/7b46267f-c728-4995-9817-87b793f77a58-kube-api-access-9rqs4\") pod \"frr-k8s-webhook-server-7784b6fcf-szbwq\" (UID: \"7b46267f-c728-4995-9817-87b793f77a58\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-szbwq" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.969412 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2fd21575-3653-416d-a59a-d2802bc9bf09-metrics-certs\") pod 
\"controller-5bddd4b946-84gxw\" (UID: \"2fd21575-3653-416d-a59a-d2802bc9bf09\") " pod="metallb-system/controller-5bddd4b946-84gxw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.969488 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fd21575-3653-416d-a59a-d2802bc9bf09-cert\") pod \"controller-5bddd4b946-84gxw\" (UID: \"2fd21575-3653-416d-a59a-d2802bc9bf09\") " pod="metallb-system/controller-5bddd4b946-84gxw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.969533 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b943df0c-29b6-42f3-884b-707aaf02c5d0-metallb-excludel2\") pod \"speaker-rtnx5\" (UID: \"b943df0c-29b6-42f3-884b-707aaf02c5d0\") " pod="metallb-system/speaker-rtnx5" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.969556 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b943df0c-29b6-42f3-884b-707aaf02c5d0-memberlist\") pod \"speaker-rtnx5\" (UID: \"b943df0c-29b6-42f3-884b-707aaf02c5d0\") " pod="metallb-system/speaker-rtnx5" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.969576 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z54v\" (UniqueName: \"kubernetes.io/projected/b943df0c-29b6-42f3-884b-707aaf02c5d0-kube-api-access-9z54v\") pod \"speaker-rtnx5\" (UID: \"b943df0c-29b6-42f3-884b-707aaf02c5d0\") " pod="metallb-system/speaker-rtnx5" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.969593 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b943df0c-29b6-42f3-884b-707aaf02c5d0-metrics-certs\") pod \"speaker-rtnx5\" (UID: \"b943df0c-29b6-42f3-884b-707aaf02c5d0\") " pod="metallb-system/speaker-rtnx5" Dec 15 05:48:00 crc 
kubenswrapper[4747]: I1215 05:48:00.969627 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f85x\" (UniqueName: \"kubernetes.io/projected/2fd21575-3653-416d-a59a-d2802bc9bf09-kube-api-access-2f85x\") pod \"controller-5bddd4b946-84gxw\" (UID: \"2fd21575-3653-416d-a59a-d2802bc9bf09\") " pod="metallb-system/controller-5bddd4b946-84gxw" Dec 15 05:48:00 crc kubenswrapper[4747]: E1215 05:48:00.970150 4747 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 15 05:48:00 crc kubenswrapper[4747]: E1215 05:48:00.970201 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b943df0c-29b6-42f3-884b-707aaf02c5d0-memberlist podName:b943df0c-29b6-42f3-884b-707aaf02c5d0 nodeName:}" failed. No retries permitted until 2025-12-15 05:48:01.470187073 +0000 UTC m=+645.166698990 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b943df0c-29b6-42f3-884b-707aaf02c5d0-memberlist") pod "speaker-rtnx5" (UID: "b943df0c-29b6-42f3-884b-707aaf02c5d0") : secret "metallb-memberlist" not found Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.971165 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b943df0c-29b6-42f3-884b-707aaf02c5d0-metallb-excludel2\") pod \"speaker-rtnx5\" (UID: \"b943df0c-29b6-42f3-884b-707aaf02c5d0\") " pod="metallb-system/speaker-rtnx5" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.973299 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b943df0c-29b6-42f3-884b-707aaf02c5d0-metrics-certs\") pod \"speaker-rtnx5\" (UID: \"b943df0c-29b6-42f3-884b-707aaf02c5d0\") " pod="metallb-system/speaker-rtnx5" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.973484 4747 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2fd21575-3653-416d-a59a-d2802bc9bf09-metrics-certs\") pod \"controller-5bddd4b946-84gxw\" (UID: \"2fd21575-3653-416d-a59a-d2802bc9bf09\") " pod="metallb-system/controller-5bddd4b946-84gxw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.974260 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fd21575-3653-416d-a59a-d2802bc9bf09-cert\") pod \"controller-5bddd4b946-84gxw\" (UID: \"2fd21575-3653-416d-a59a-d2802bc9bf09\") " pod="metallb-system/controller-5bddd4b946-84gxw" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.985065 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z54v\" (UniqueName: \"kubernetes.io/projected/b943df0c-29b6-42f3-884b-707aaf02c5d0-kube-api-access-9z54v\") pod \"speaker-rtnx5\" (UID: \"b943df0c-29b6-42f3-884b-707aaf02c5d0\") " pod="metallb-system/speaker-rtnx5" Dec 15 05:48:00 crc kubenswrapper[4747]: I1215 05:48:00.986354 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f85x\" (UniqueName: \"kubernetes.io/projected/2fd21575-3653-416d-a59a-d2802bc9bf09-kube-api-access-2f85x\") pod \"controller-5bddd4b946-84gxw\" (UID: \"2fd21575-3653-416d-a59a-d2802bc9bf09\") " pod="metallb-system/controller-5bddd4b946-84gxw" Dec 15 05:48:01 crc kubenswrapper[4747]: I1215 05:48:01.116207 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-84gxw" Dec 15 05:48:01 crc kubenswrapper[4747]: I1215 05:48:01.376771 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b46267f-c728-4995-9817-87b793f77a58-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-szbwq\" (UID: \"7b46267f-c728-4995-9817-87b793f77a58\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-szbwq" Dec 15 05:48:01 crc kubenswrapper[4747]: I1215 05:48:01.377186 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf41786d-c244-4754-ba59-4a9b6c834f9f-metrics-certs\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:01 crc kubenswrapper[4747]: I1215 05:48:01.381408 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b46267f-c728-4995-9817-87b793f77a58-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-szbwq\" (UID: \"7b46267f-c728-4995-9817-87b793f77a58\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-szbwq" Dec 15 05:48:01 crc kubenswrapper[4747]: I1215 05:48:01.381547 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf41786d-c244-4754-ba59-4a9b6c834f9f-metrics-certs\") pod \"frr-k8s-d98xw\" (UID: \"cf41786d-c244-4754-ba59-4a9b6c834f9f\") " pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:01 crc kubenswrapper[4747]: I1215 05:48:01.478656 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b943df0c-29b6-42f3-884b-707aaf02c5d0-memberlist\") pod \"speaker-rtnx5\" (UID: \"b943df0c-29b6-42f3-884b-707aaf02c5d0\") " pod="metallb-system/speaker-rtnx5" Dec 15 05:48:01 crc kubenswrapper[4747]: E1215 05:48:01.478827 4747 secret.go:188] 
Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 15 05:48:01 crc kubenswrapper[4747]: E1215 05:48:01.478921 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b943df0c-29b6-42f3-884b-707aaf02c5d0-memberlist podName:b943df0c-29b6-42f3-884b-707aaf02c5d0 nodeName:}" failed. No retries permitted until 2025-12-15 05:48:02.478896745 +0000 UTC m=+646.175408672 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b943df0c-29b6-42f3-884b-707aaf02c5d0-memberlist") pod "speaker-rtnx5" (UID: "b943df0c-29b6-42f3-884b-707aaf02c5d0") : secret "metallb-memberlist" not found Dec 15 05:48:01 crc kubenswrapper[4747]: I1215 05:48:01.569657 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-84gxw"] Dec 15 05:48:01 crc kubenswrapper[4747]: I1215 05:48:01.622799 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-d98xw" Dec 15 05:48:01 crc kubenswrapper[4747]: I1215 05:48:01.629046 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-szbwq" Dec 15 05:48:02 crc kubenswrapper[4747]: I1215 05:48:02.013200 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-szbwq"] Dec 15 05:48:02 crc kubenswrapper[4747]: W1215 05:48:02.018120 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b46267f_c728_4995_9817_87b793f77a58.slice/crio-6241e9f18b2f0b036c2c14da6ff1bf1adb940ea0d0d880940e94afe5c6a31f62 WatchSource:0}: Error finding container 6241e9f18b2f0b036c2c14da6ff1bf1adb940ea0d0d880940e94afe5c6a31f62: Status 404 returned error can't find the container with id 6241e9f18b2f0b036c2c14da6ff1bf1adb940ea0d0d880940e94afe5c6a31f62 Dec 15 05:48:02 crc kubenswrapper[4747]: I1215 05:48:02.062493 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-84gxw" event={"ID":"2fd21575-3653-416d-a59a-d2802bc9bf09","Type":"ContainerStarted","Data":"e570f0f970918df5db76283a2c6966799a22905b9d3c2084f53502bf9bd69cf2"} Dec 15 05:48:02 crc kubenswrapper[4747]: I1215 05:48:02.062551 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-84gxw" event={"ID":"2fd21575-3653-416d-a59a-d2802bc9bf09","Type":"ContainerStarted","Data":"775f62e75e501b4fa662ffca243b3d52713e46021a8cd7c8e80167fe7ffaafa3"} Dec 15 05:48:02 crc kubenswrapper[4747]: I1215 05:48:02.062570 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-84gxw" event={"ID":"2fd21575-3653-416d-a59a-d2802bc9bf09","Type":"ContainerStarted","Data":"ae0a85cffc897404118db04d880d07f9842f328b69703539de4de00f0cb1ca73"} Dec 15 05:48:02 crc kubenswrapper[4747]: I1215 05:48:02.062634 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-84gxw" Dec 15 05:48:02 crc kubenswrapper[4747]: I1215 
05:48:02.064384 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-szbwq" event={"ID":"7b46267f-c728-4995-9817-87b793f77a58","Type":"ContainerStarted","Data":"6241e9f18b2f0b036c2c14da6ff1bf1adb940ea0d0d880940e94afe5c6a31f62"} Dec 15 05:48:02 crc kubenswrapper[4747]: I1215 05:48:02.065810 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d98xw" event={"ID":"cf41786d-c244-4754-ba59-4a9b6c834f9f","Type":"ContainerStarted","Data":"8d1b5ce3d3fa64a78d8648299c7c979a2935fbdec76402cfb5ab4532740c63e2"} Dec 15 05:48:02 crc kubenswrapper[4747]: I1215 05:48:02.084386 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-84gxw" podStartSLOduration=2.084362692 podStartE2EDuration="2.084362692s" podCreationTimestamp="2025-12-15 05:48:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:48:02.078137804 +0000 UTC m=+645.774649721" watchObservedRunningTime="2025-12-15 05:48:02.084362692 +0000 UTC m=+645.780874609" Dec 15 05:48:02 crc kubenswrapper[4747]: I1215 05:48:02.492673 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b943df0c-29b6-42f3-884b-707aaf02c5d0-memberlist\") pod \"speaker-rtnx5\" (UID: \"b943df0c-29b6-42f3-884b-707aaf02c5d0\") " pod="metallb-system/speaker-rtnx5" Dec 15 05:48:02 crc kubenswrapper[4747]: I1215 05:48:02.498209 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b943df0c-29b6-42f3-884b-707aaf02c5d0-memberlist\") pod \"speaker-rtnx5\" (UID: \"b943df0c-29b6-42f3-884b-707aaf02c5d0\") " pod="metallb-system/speaker-rtnx5" Dec 15 05:48:02 crc kubenswrapper[4747]: I1215 05:48:02.609660 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-rtnx5" Dec 15 05:48:02 crc kubenswrapper[4747]: W1215 05:48:02.631636 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb943df0c_29b6_42f3_884b_707aaf02c5d0.slice/crio-7055d0274c8cd038cd626b16d1c1d60714165b0e57d6c55aa6cec4a8f8f9286e WatchSource:0}: Error finding container 7055d0274c8cd038cd626b16d1c1d60714165b0e57d6c55aa6cec4a8f8f9286e: Status 404 returned error can't find the container with id 7055d0274c8cd038cd626b16d1c1d60714165b0e57d6c55aa6cec4a8f8f9286e Dec 15 05:48:03 crc kubenswrapper[4747]: I1215 05:48:03.073485 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rtnx5" event={"ID":"b943df0c-29b6-42f3-884b-707aaf02c5d0","Type":"ContainerStarted","Data":"72a044c19de9355b3805faf17baf8e318ce35de6fce14c9e15546d927a725bb3"} Dec 15 05:48:03 crc kubenswrapper[4747]: I1215 05:48:03.073914 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rtnx5" event={"ID":"b943df0c-29b6-42f3-884b-707aaf02c5d0","Type":"ContainerStarted","Data":"ee62a684c8199291035a2693ca8a6ebe470a6ef921b2c6d9fbccc519ec0554aa"} Dec 15 05:48:03 crc kubenswrapper[4747]: I1215 05:48:03.073946 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rtnx5" event={"ID":"b943df0c-29b6-42f3-884b-707aaf02c5d0","Type":"ContainerStarted","Data":"7055d0274c8cd038cd626b16d1c1d60714165b0e57d6c55aa6cec4a8f8f9286e"} Dec 15 05:48:03 crc kubenswrapper[4747]: I1215 05:48:03.074148 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-rtnx5" Dec 15 05:48:03 crc kubenswrapper[4747]: I1215 05:48:03.089306 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-rtnx5" podStartSLOduration=3.089290033 podStartE2EDuration="3.089290033s" podCreationTimestamp="2025-12-15 05:48:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:48:03.08516745 +0000 UTC m=+646.781679377" watchObservedRunningTime="2025-12-15 05:48:03.089290033 +0000 UTC m=+646.785801950" Dec 15 05:48:09 crc kubenswrapper[4747]: I1215 05:48:09.121044 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-szbwq" event={"ID":"7b46267f-c728-4995-9817-87b793f77a58","Type":"ContainerStarted","Data":"f5d78d53303b56f2eae79af7b265044de6050e7cf47ac961d362e0eaa90315d7"} Dec 15 05:48:09 crc kubenswrapper[4747]: I1215 05:48:09.121778 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-szbwq" Dec 15 05:48:09 crc kubenswrapper[4747]: I1215 05:48:09.124254 4747 generic.go:334] "Generic (PLEG): container finished" podID="cf41786d-c244-4754-ba59-4a9b6c834f9f" containerID="1aaacca5caea560dc82693f1e339ddcdb8cb30f2b9d32fff051c1271d4a070dc" exitCode=0 Dec 15 05:48:09 crc kubenswrapper[4747]: I1215 05:48:09.124316 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d98xw" event={"ID":"cf41786d-c244-4754-ba59-4a9b6c834f9f","Type":"ContainerDied","Data":"1aaacca5caea560dc82693f1e339ddcdb8cb30f2b9d32fff051c1271d4a070dc"} Dec 15 05:48:09 crc kubenswrapper[4747]: I1215 05:48:09.146126 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-szbwq" podStartSLOduration=2.59060893 podStartE2EDuration="9.146111956s" podCreationTimestamp="2025-12-15 05:48:00 +0000 UTC" firstStartedPulling="2025-12-15 05:48:02.020799847 +0000 UTC m=+645.717311764" lastFinishedPulling="2025-12-15 05:48:08.576302872 +0000 UTC m=+652.272814790" observedRunningTime="2025-12-15 05:48:09.143511764 +0000 UTC m=+652.840023682" watchObservedRunningTime="2025-12-15 05:48:09.146111956 +0000 UTC m=+652.842623874" Dec 15 05:48:10 
crc kubenswrapper[4747]: I1215 05:48:10.134559 4747 generic.go:334] "Generic (PLEG): container finished" podID="cf41786d-c244-4754-ba59-4a9b6c834f9f" containerID="8b8585e4f4f5cfef80847379e35161105e282c7abbda725b181b4f084d5515bf" exitCode=0
Dec 15 05:48:10 crc kubenswrapper[4747]: I1215 05:48:10.134686 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d98xw" event={"ID":"cf41786d-c244-4754-ba59-4a9b6c834f9f","Type":"ContainerDied","Data":"8b8585e4f4f5cfef80847379e35161105e282c7abbda725b181b4f084d5515bf"}
Dec 15 05:48:11 crc kubenswrapper[4747]: I1215 05:48:11.121088 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-84gxw"
Dec 15 05:48:11 crc kubenswrapper[4747]: I1215 05:48:11.145237 4747 generic.go:334] "Generic (PLEG): container finished" podID="cf41786d-c244-4754-ba59-4a9b6c834f9f" containerID="f05e8293319d20ac8492f579fe3320083f670efc5e3b8222115f935b6ae19dcc" exitCode=0
Dec 15 05:48:11 crc kubenswrapper[4747]: I1215 05:48:11.145280 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d98xw" event={"ID":"cf41786d-c244-4754-ba59-4a9b6c834f9f","Type":"ContainerDied","Data":"f05e8293319d20ac8492f579fe3320083f670efc5e3b8222115f935b6ae19dcc"}
Dec 15 05:48:12 crc kubenswrapper[4747]: I1215 05:48:12.160370 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d98xw" event={"ID":"cf41786d-c244-4754-ba59-4a9b6c834f9f","Type":"ContainerStarted","Data":"4c7047be4d3b4d02f6e31ef60a79f9933e22dc16e051ecc21abe56df5e677a7c"}
Dec 15 05:48:12 crc kubenswrapper[4747]: I1215 05:48:12.161379 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d98xw" event={"ID":"cf41786d-c244-4754-ba59-4a9b6c834f9f","Type":"ContainerStarted","Data":"559ba4a25c80c3497076bc38d5d17492daa6db9fa950bec73e0a98aedb8df8fe"}
Dec 15 05:48:12 crc kubenswrapper[4747]: I1215 05:48:12.161449 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d98xw" event={"ID":"cf41786d-c244-4754-ba59-4a9b6c834f9f","Type":"ContainerStarted","Data":"7904422adc210f4444a600cfd4fa38042af8cb00213af2cddb18b51edfc75e1e"}
Dec 15 05:48:12 crc kubenswrapper[4747]: I1215 05:48:12.161478 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-d98xw"
Dec 15 05:48:12 crc kubenswrapper[4747]: I1215 05:48:12.161494 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d98xw" event={"ID":"cf41786d-c244-4754-ba59-4a9b6c834f9f","Type":"ContainerStarted","Data":"1ecd5b78deab9122844c3de9ea5f2337d2849542f0c923eb259e2ec90cde86ed"}
Dec 15 05:48:12 crc kubenswrapper[4747]: I1215 05:48:12.161505 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d98xw" event={"ID":"cf41786d-c244-4754-ba59-4a9b6c834f9f","Type":"ContainerStarted","Data":"a7602b99329e8536ee3d6ad2369c0f120f8eb0e09a827c2d149773844ca037a5"}
Dec 15 05:48:12 crc kubenswrapper[4747]: I1215 05:48:12.161514 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d98xw" event={"ID":"cf41786d-c244-4754-ba59-4a9b6c834f9f","Type":"ContainerStarted","Data":"22b77a0e6966e61e4b41a266b33c26576337fbd9a7ae3b49340d54968c93799f"}
Dec 15 05:48:12 crc kubenswrapper[4747]: I1215 05:48:12.185864 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-d98xw" podStartSLOduration=5.331499033 podStartE2EDuration="12.185852139s" podCreationTimestamp="2025-12-15 05:48:00 +0000 UTC" firstStartedPulling="2025-12-15 05:48:01.718830199 +0000 UTC m=+645.415342116" lastFinishedPulling="2025-12-15 05:48:08.573183305 +0000 UTC m=+652.269695222" observedRunningTime="2025-12-15 05:48:12.183343741 +0000 UTC m=+655.879855657" watchObservedRunningTime="2025-12-15 05:48:12.185852139 +0000 UTC m=+655.882364056"
Dec 15 05:48:12 crc kubenswrapper[4747]: I1215 05:48:12.614711 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-rtnx5"
Dec 15 05:48:16 crc kubenswrapper[4747]: I1215 05:48:16.624151 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-d98xw"
Dec 15 05:48:16 crc kubenswrapper[4747]: I1215 05:48:16.654327 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-d98xw"
Dec 15 05:48:18 crc kubenswrapper[4747]: I1215 05:48:18.240880 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ssgvg"]
Dec 15 05:48:18 crc kubenswrapper[4747]: I1215 05:48:18.243163 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ssgvg"
Dec 15 05:48:18 crc kubenswrapper[4747]: I1215 05:48:18.245963 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Dec 15 05:48:18 crc kubenswrapper[4747]: I1215 05:48:18.246122 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-mkzhp"
Dec 15 05:48:18 crc kubenswrapper[4747]: I1215 05:48:18.246283 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Dec 15 05:48:18 crc kubenswrapper[4747]: I1215 05:48:18.251612 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ssgvg"]
Dec 15 05:48:18 crc kubenswrapper[4747]: I1215 05:48:18.327862 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48k8g\" (UniqueName: \"kubernetes.io/projected/600db6fb-c49e-40e5-a195-756c80b40b7d-kube-api-access-48k8g\") pod \"openstack-operator-index-ssgvg\" (UID: \"600db6fb-c49e-40e5-a195-756c80b40b7d\") " pod="openstack-operators/openstack-operator-index-ssgvg"
Dec 15 05:48:18 crc kubenswrapper[4747]: I1215 05:48:18.428766 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48k8g\" (UniqueName: \"kubernetes.io/projected/600db6fb-c49e-40e5-a195-756c80b40b7d-kube-api-access-48k8g\") pod \"openstack-operator-index-ssgvg\" (UID: \"600db6fb-c49e-40e5-a195-756c80b40b7d\") " pod="openstack-operators/openstack-operator-index-ssgvg"
Dec 15 05:48:18 crc kubenswrapper[4747]: I1215 05:48:18.448631 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48k8g\" (UniqueName: \"kubernetes.io/projected/600db6fb-c49e-40e5-a195-756c80b40b7d-kube-api-access-48k8g\") pod \"openstack-operator-index-ssgvg\" (UID: \"600db6fb-c49e-40e5-a195-756c80b40b7d\") " pod="openstack-operators/openstack-operator-index-ssgvg"
Dec 15 05:48:18 crc kubenswrapper[4747]: I1215 05:48:18.559716 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ssgvg"
Dec 15 05:48:19 crc kubenswrapper[4747]: I1215 05:48:19.025775 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ssgvg"]
Dec 15 05:48:19 crc kubenswrapper[4747]: I1215 05:48:19.204867 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ssgvg" event={"ID":"600db6fb-c49e-40e5-a195-756c80b40b7d","Type":"ContainerStarted","Data":"a4ab27745f3f214a36669193d0c75551e9bd82aec6a5feed31143ef417423bed"}
Dec 15 05:48:21 crc kubenswrapper[4747]: I1215 05:48:21.218146 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ssgvg" event={"ID":"600db6fb-c49e-40e5-a195-756c80b40b7d","Type":"ContainerStarted","Data":"52e2d0d5af71690b5949aa14cf9c2fdf27379448c4b75fc78601825ed1756299"}
Dec 15 05:48:21 crc kubenswrapper[4747]: I1215 05:48:21.230313 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ssgvg" podStartSLOduration=1.906189269 podStartE2EDuration="3.230296066s" podCreationTimestamp="2025-12-15 05:48:18 +0000 UTC" firstStartedPulling="2025-12-15 05:48:19.03736547 +0000 UTC m=+662.733877387" lastFinishedPulling="2025-12-15 05:48:20.361472267 +0000 UTC m=+664.057984184" observedRunningTime="2025-12-15 05:48:21.229577113 +0000 UTC m=+664.926089031" watchObservedRunningTime="2025-12-15 05:48:21.230296066 +0000 UTC m=+664.926807972"
Dec 15 05:48:21 crc kubenswrapper[4747]: I1215 05:48:21.625791 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-d98xw"
Dec 15 05:48:21 crc kubenswrapper[4747]: I1215 05:48:21.632560 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-szbwq"
Dec 15 05:48:28 crc kubenswrapper[4747]: I1215 05:48:28.560684 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-ssgvg"
Dec 15 05:48:28 crc kubenswrapper[4747]: I1215 05:48:28.561401 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-ssgvg"
Dec 15 05:48:28 crc kubenswrapper[4747]: I1215 05:48:28.589911 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-ssgvg"
Dec 15 05:48:29 crc kubenswrapper[4747]: I1215 05:48:29.298621 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-ssgvg"
Dec 15 05:48:30 crc kubenswrapper[4747]: I1215 05:48:30.877681 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl"]
Dec 15 05:48:30 crc kubenswrapper[4747]: I1215 05:48:30.879132 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl"
Dec 15 05:48:30 crc kubenswrapper[4747]: I1215 05:48:30.880998 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c345b7d-bd2d-43c7-9f3f-906a003a24e5-util\") pod \"ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl\" (UID: \"2c345b7d-bd2d-43c7-9f3f-906a003a24e5\") " pod="openstack-operators/ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl"
Dec 15 05:48:30 crc kubenswrapper[4747]: I1215 05:48:30.881070 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c345b7d-bd2d-43c7-9f3f-906a003a24e5-bundle\") pod \"ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl\" (UID: \"2c345b7d-bd2d-43c7-9f3f-906a003a24e5\") " pod="openstack-operators/ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl"
Dec 15 05:48:30 crc kubenswrapper[4747]: I1215 05:48:30.881100 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp6xp\" (UniqueName: \"kubernetes.io/projected/2c345b7d-bd2d-43c7-9f3f-906a003a24e5-kube-api-access-fp6xp\") pod \"ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl\" (UID: \"2c345b7d-bd2d-43c7-9f3f-906a003a24e5\") " pod="openstack-operators/ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl"
Dec 15 05:48:30 crc kubenswrapper[4747]: I1215 05:48:30.881554 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-dzsks"
Dec 15 05:48:30 crc kubenswrapper[4747]: I1215 05:48:30.887116 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl"]
Dec 15 05:48:30 crc kubenswrapper[4747]: I1215 05:48:30.982368 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c345b7d-bd2d-43c7-9f3f-906a003a24e5-util\") pod \"ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl\" (UID: \"2c345b7d-bd2d-43c7-9f3f-906a003a24e5\") " pod="openstack-operators/ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl"
Dec 15 05:48:30 crc kubenswrapper[4747]: I1215 05:48:30.982765 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c345b7d-bd2d-43c7-9f3f-906a003a24e5-bundle\") pod \"ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl\" (UID: \"2c345b7d-bd2d-43c7-9f3f-906a003a24e5\") " pod="openstack-operators/ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl"
Dec 15 05:48:30 crc kubenswrapper[4747]: I1215 05:48:30.982831 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp6xp\" (UniqueName: \"kubernetes.io/projected/2c345b7d-bd2d-43c7-9f3f-906a003a24e5-kube-api-access-fp6xp\") pod \"ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl\" (UID: \"2c345b7d-bd2d-43c7-9f3f-906a003a24e5\") " pod="openstack-operators/ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl"
Dec 15 05:48:30 crc kubenswrapper[4747]: I1215 05:48:30.982795 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c345b7d-bd2d-43c7-9f3f-906a003a24e5-util\") pod \"ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl\" (UID: \"2c345b7d-bd2d-43c7-9f3f-906a003a24e5\") " pod="openstack-operators/ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl"
Dec 15 05:48:30 crc kubenswrapper[4747]: I1215 05:48:30.983364 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c345b7d-bd2d-43c7-9f3f-906a003a24e5-bundle\") pod \"ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl\" (UID: \"2c345b7d-bd2d-43c7-9f3f-906a003a24e5\") " pod="openstack-operators/ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl"
Dec 15 05:48:31 crc kubenswrapper[4747]: I1215 05:48:31.001869 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp6xp\" (UniqueName: \"kubernetes.io/projected/2c345b7d-bd2d-43c7-9f3f-906a003a24e5-kube-api-access-fp6xp\") pod \"ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl\" (UID: \"2c345b7d-bd2d-43c7-9f3f-906a003a24e5\") " pod="openstack-operators/ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl"
Dec 15 05:48:31 crc kubenswrapper[4747]: I1215 05:48:31.195393 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl"
Dec 15 05:48:31 crc kubenswrapper[4747]: I1215 05:48:31.585210 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl"]
Dec 15 05:48:32 crc kubenswrapper[4747]: I1215 05:48:32.302279 4747 generic.go:334] "Generic (PLEG): container finished" podID="2c345b7d-bd2d-43c7-9f3f-906a003a24e5" containerID="b63d33f7734b24ea3454f6025491a572a4505c14b1aaaa9ab5358baebf706bfe" exitCode=0
Dec 15 05:48:32 crc kubenswrapper[4747]: I1215 05:48:32.302390 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl" event={"ID":"2c345b7d-bd2d-43c7-9f3f-906a003a24e5","Type":"ContainerDied","Data":"b63d33f7734b24ea3454f6025491a572a4505c14b1aaaa9ab5358baebf706bfe"}
Dec 15 05:48:32 crc kubenswrapper[4747]: I1215 05:48:32.302748 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl" event={"ID":"2c345b7d-bd2d-43c7-9f3f-906a003a24e5","Type":"ContainerStarted","Data":"1f8e1358a4bbe5386a98c11604313d9a5cabd3daebcb3e55487b47c7a43e61a2"}
Dec 15 05:48:34 crc kubenswrapper[4747]: I1215 05:48:34.317228 4747 generic.go:334] "Generic (PLEG): container finished" podID="2c345b7d-bd2d-43c7-9f3f-906a003a24e5" containerID="0ffa5e75eb46c9f8f360817786789614f23d7c47de8230e9b86ba7393fdb792f" exitCode=0
Dec 15 05:48:34 crc kubenswrapper[4747]: I1215 05:48:34.317347 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl" event={"ID":"2c345b7d-bd2d-43c7-9f3f-906a003a24e5","Type":"ContainerDied","Data":"0ffa5e75eb46c9f8f360817786789614f23d7c47de8230e9b86ba7393fdb792f"}
Dec 15 05:48:35 crc kubenswrapper[4747]: I1215 05:48:35.325444 4747 generic.go:334] "Generic (PLEG): container finished" podID="2c345b7d-bd2d-43c7-9f3f-906a003a24e5" containerID="2a8e1b42d2b1894e8f74207130adeb12628f869552a23ecf4f428fe6fdfe87a8" exitCode=0
Dec 15 05:48:35 crc kubenswrapper[4747]: I1215 05:48:35.325490 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl" event={"ID":"2c345b7d-bd2d-43c7-9f3f-906a003a24e5","Type":"ContainerDied","Data":"2a8e1b42d2b1894e8f74207130adeb12628f869552a23ecf4f428fe6fdfe87a8"}
Dec 15 05:48:36 crc kubenswrapper[4747]: I1215 05:48:36.550056 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl"
Dec 15 05:48:36 crc kubenswrapper[4747]: I1215 05:48:36.652680 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c345b7d-bd2d-43c7-9f3f-906a003a24e5-bundle\") pod \"2c345b7d-bd2d-43c7-9f3f-906a003a24e5\" (UID: \"2c345b7d-bd2d-43c7-9f3f-906a003a24e5\") "
Dec 15 05:48:36 crc kubenswrapper[4747]: I1215 05:48:36.653510 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp6xp\" (UniqueName: \"kubernetes.io/projected/2c345b7d-bd2d-43c7-9f3f-906a003a24e5-kube-api-access-fp6xp\") pod \"2c345b7d-bd2d-43c7-9f3f-906a003a24e5\" (UID: \"2c345b7d-bd2d-43c7-9f3f-906a003a24e5\") "
Dec 15 05:48:36 crc kubenswrapper[4747]: I1215 05:48:36.653648 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c345b7d-bd2d-43c7-9f3f-906a003a24e5-bundle" (OuterVolumeSpecName: "bundle") pod "2c345b7d-bd2d-43c7-9f3f-906a003a24e5" (UID: "2c345b7d-bd2d-43c7-9f3f-906a003a24e5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 15 05:48:36 crc kubenswrapper[4747]: I1215 05:48:36.654491 4747 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c345b7d-bd2d-43c7-9f3f-906a003a24e5-bundle\") on node \"crc\" DevicePath \"\""
Dec 15 05:48:36 crc kubenswrapper[4747]: I1215 05:48:36.667100 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c345b7d-bd2d-43c7-9f3f-906a003a24e5-kube-api-access-fp6xp" (OuterVolumeSpecName: "kube-api-access-fp6xp") pod "2c345b7d-bd2d-43c7-9f3f-906a003a24e5" (UID: "2c345b7d-bd2d-43c7-9f3f-906a003a24e5"). InnerVolumeSpecName "kube-api-access-fp6xp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 15 05:48:36 crc kubenswrapper[4747]: I1215 05:48:36.756602 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c345b7d-bd2d-43c7-9f3f-906a003a24e5-util\") pod \"2c345b7d-bd2d-43c7-9f3f-906a003a24e5\" (UID: \"2c345b7d-bd2d-43c7-9f3f-906a003a24e5\") "
Dec 15 05:48:36 crc kubenswrapper[4747]: I1215 05:48:36.757066 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp6xp\" (UniqueName: \"kubernetes.io/projected/2c345b7d-bd2d-43c7-9f3f-906a003a24e5-kube-api-access-fp6xp\") on node \"crc\" DevicePath \"\""
Dec 15 05:48:36 crc kubenswrapper[4747]: I1215 05:48:36.766969 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c345b7d-bd2d-43c7-9f3f-906a003a24e5-util" (OuterVolumeSpecName: "util") pod "2c345b7d-bd2d-43c7-9f3f-906a003a24e5" (UID: "2c345b7d-bd2d-43c7-9f3f-906a003a24e5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 15 05:48:36 crc kubenswrapper[4747]: I1215 05:48:36.859062 4747 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c345b7d-bd2d-43c7-9f3f-906a003a24e5-util\") on node \"crc\" DevicePath \"\""
Dec 15 05:48:37 crc kubenswrapper[4747]: I1215 05:48:37.339506 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl" event={"ID":"2c345b7d-bd2d-43c7-9f3f-906a003a24e5","Type":"ContainerDied","Data":"1f8e1358a4bbe5386a98c11604313d9a5cabd3daebcb3e55487b47c7a43e61a2"}
Dec 15 05:48:37 crc kubenswrapper[4747]: I1215 05:48:37.339561 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f8e1358a4bbe5386a98c11604313d9a5cabd3daebcb3e55487b47c7a43e61a2"
Dec 15 05:48:37 crc kubenswrapper[4747]: I1215 05:48:37.339997 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl"
Dec 15 05:48:39 crc kubenswrapper[4747]: I1215 05:48:39.195330 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-57bbbf4567-4l6vr"]
Dec 15 05:48:39 crc kubenswrapper[4747]: E1215 05:48:39.195577 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c345b7d-bd2d-43c7-9f3f-906a003a24e5" containerName="pull"
Dec 15 05:48:39 crc kubenswrapper[4747]: I1215 05:48:39.195589 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c345b7d-bd2d-43c7-9f3f-906a003a24e5" containerName="pull"
Dec 15 05:48:39 crc kubenswrapper[4747]: E1215 05:48:39.195598 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c345b7d-bd2d-43c7-9f3f-906a003a24e5" containerName="extract"
Dec 15 05:48:39 crc kubenswrapper[4747]: I1215 05:48:39.195603 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c345b7d-bd2d-43c7-9f3f-906a003a24e5" containerName="extract"
Dec 15 05:48:39 crc kubenswrapper[4747]: E1215 05:48:39.195613 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c345b7d-bd2d-43c7-9f3f-906a003a24e5" containerName="util"
Dec 15 05:48:39 crc kubenswrapper[4747]: I1215 05:48:39.195618 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c345b7d-bd2d-43c7-9f3f-906a003a24e5" containerName="util"
Dec 15 05:48:39 crc kubenswrapper[4747]: I1215 05:48:39.195728 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c345b7d-bd2d-43c7-9f3f-906a003a24e5" containerName="extract"
Dec 15 05:48:39 crc kubenswrapper[4747]: I1215 05:48:39.196123 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-57bbbf4567-4l6vr"
Dec 15 05:48:39 crc kubenswrapper[4747]: I1215 05:48:39.197619 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-txc8g"
Dec 15 05:48:39 crc kubenswrapper[4747]: I1215 05:48:39.225972 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-57bbbf4567-4l6vr"]
Dec 15 05:48:39 crc kubenswrapper[4747]: I1215 05:48:39.392180 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwcdv\" (UniqueName: \"kubernetes.io/projected/e1d8f4a6-dd71-427f-98ac-5e77cc0fb1ae-kube-api-access-dwcdv\") pod \"openstack-operator-controller-operator-57bbbf4567-4l6vr\" (UID: \"e1d8f4a6-dd71-427f-98ac-5e77cc0fb1ae\") " pod="openstack-operators/openstack-operator-controller-operator-57bbbf4567-4l6vr"
Dec 15 05:48:39 crc kubenswrapper[4747]: I1215 05:48:39.493277 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwcdv\" (UniqueName: \"kubernetes.io/projected/e1d8f4a6-dd71-427f-98ac-5e77cc0fb1ae-kube-api-access-dwcdv\") pod \"openstack-operator-controller-operator-57bbbf4567-4l6vr\" (UID: \"e1d8f4a6-dd71-427f-98ac-5e77cc0fb1ae\") " pod="openstack-operators/openstack-operator-controller-operator-57bbbf4567-4l6vr"
Dec 15 05:48:39 crc kubenswrapper[4747]: I1215 05:48:39.512737 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwcdv\" (UniqueName: \"kubernetes.io/projected/e1d8f4a6-dd71-427f-98ac-5e77cc0fb1ae-kube-api-access-dwcdv\") pod \"openstack-operator-controller-operator-57bbbf4567-4l6vr\" (UID: \"e1d8f4a6-dd71-427f-98ac-5e77cc0fb1ae\") " pod="openstack-operators/openstack-operator-controller-operator-57bbbf4567-4l6vr"
Dec 15 05:48:39 crc kubenswrapper[4747]: I1215 05:48:39.810644 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-57bbbf4567-4l6vr"
Dec 15 05:48:40 crc kubenswrapper[4747]: I1215 05:48:40.188362 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-57bbbf4567-4l6vr"]
Dec 15 05:48:40 crc kubenswrapper[4747]: I1215 05:48:40.356843 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-57bbbf4567-4l6vr" event={"ID":"e1d8f4a6-dd71-427f-98ac-5e77cc0fb1ae","Type":"ContainerStarted","Data":"a67caed91da0be0bf1b7c3fa91b5267c4d457bf4b44e03e6e5c34d3081628d8d"}
Dec 15 05:48:47 crc kubenswrapper[4747]: I1215 05:48:47.413255 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-57bbbf4567-4l6vr" event={"ID":"e1d8f4a6-dd71-427f-98ac-5e77cc0fb1ae","Type":"ContainerStarted","Data":"83b1889badf10c03dff738d3db911b7d5e32740cd52e6b859e5480600c39df15"}
Dec 15 05:48:47 crc kubenswrapper[4747]: I1215 05:48:47.414068 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-57bbbf4567-4l6vr"
Dec 15 05:48:47 crc kubenswrapper[4747]: I1215 05:48:47.442321 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-57bbbf4567-4l6vr" podStartSLOduration=2.116594782 podStartE2EDuration="8.442292215s" podCreationTimestamp="2025-12-15 05:48:39 +0000 UTC" firstStartedPulling="2025-12-15 05:48:40.207493569 +0000 UTC m=+683.904005486" lastFinishedPulling="2025-12-15 05:48:46.533191001 +0000 UTC m=+690.229702919" observedRunningTime="2025-12-15 05:48:47.438974424 +0000 UTC m=+691.135486341" watchObservedRunningTime="2025-12-15 05:48:47.442292215 +0000 UTC m=+691.138804132"
Dec 15 05:48:59 crc kubenswrapper[4747]: I1215 05:48:59.813704 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-57bbbf4567-4l6vr"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.801782 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-95949466-pzsnr"]
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.803065 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-95949466-pzsnr"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.805039 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-cmfkk"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.809895 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x597t\" (UniqueName: \"kubernetes.io/projected/966a3797-97c2-4e8d-8799-6b8a287efd78-kube-api-access-x597t\") pod \"barbican-operator-controller-manager-95949466-pzsnr\" (UID: \"966a3797-97c2-4e8d-8799-6b8a287efd78\") " pod="openstack-operators/barbican-operator-controller-manager-95949466-pzsnr"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.810803 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5cf45c46bd-ggkl6"]
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.811591 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5cf45c46bd-ggkl6"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.814652 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-jptfk"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.817063 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-95949466-pzsnr"]
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.821636 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-rkqrw"]
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.822858 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-rkqrw"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.824955 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-f95t9"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.826705 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5cf45c46bd-ggkl6"]
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.832258 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-rkqrw"]
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.852298 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-767f9d7567-hk9c4"]
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.853198 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-767f9d7567-hk9c4"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.856214 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-767f9d7567-hk9c4"]
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.857240 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-mks78"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.866866 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-59b8dcb766-tcs4c"]
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.872120 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-59b8dcb766-tcs4c"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.874394 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-dq2r6"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.876511 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-59b8dcb766-tcs4c"]
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.883483 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6ccf486b9-cmgcn"]
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.884270 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6ccf486b9-cmgcn"]
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.884360 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6ccf486b9-cmgcn"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.887474 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-4gqrk"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.912317 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-58944d7758-s79wq"]
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.913336 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58944d7758-s79wq"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.913606 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x597t\" (UniqueName: \"kubernetes.io/projected/966a3797-97c2-4e8d-8799-6b8a287efd78-kube-api-access-x597t\") pod \"barbican-operator-controller-manager-95949466-pzsnr\" (UID: \"966a3797-97c2-4e8d-8799-6b8a287efd78\") " pod="openstack-operators/barbican-operator-controller-manager-95949466-pzsnr"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.921016 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.921230 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-r7sbb"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.927156 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-58944d7758-s79wq"]
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.932473 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f458558d7-vf58x"]
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.933370 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f458558d7-vf58x"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.935977 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5c7cbf548f-v7cjm"]
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.936803 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5c7cbf548f-v7cjm"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.937663 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-jcv62"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.937947 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-k5fk4"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.943525 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x597t\" (UniqueName: \"kubernetes.io/projected/966a3797-97c2-4e8d-8799-6b8a287efd78-kube-api-access-x597t\") pod \"barbican-operator-controller-manager-95949466-pzsnr\" (UID: \"966a3797-97c2-4e8d-8799-6b8a287efd78\") " pod="openstack-operators/barbican-operator-controller-manager-95949466-pzsnr"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.961863 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f458558d7-vf58x"]
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.971306 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5c7cbf548f-v7cjm"]
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.976564 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5fdd9786f7-dg8cj"]
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.977458 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5fdd9786f7-dg8cj"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.980474 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-mvgjw"
Dec 15 05:49:17 crc kubenswrapper[4747]: I1215 05:49:17.997311 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5fdd9786f7-dg8cj"]
Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.008041 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-qw6tr"]
Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.009024 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-qw6tr"
Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.011466 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-f76f4954c-5chln"]
Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.012147 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-sdx96"
Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.012371 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-f76f4954c-5chln" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.014535 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mfv9\" (UniqueName: \"kubernetes.io/projected/50d161a9-2162-4642-bfd4-74bde1129134-kube-api-access-5mfv9\") pod \"cinder-operator-controller-manager-5cf45c46bd-ggkl6\" (UID: \"50d161a9-2162-4642-bfd4-74bde1129134\") " pod="openstack-operators/cinder-operator-controller-manager-5cf45c46bd-ggkl6" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.014627 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb8f1731-54b2-4d71-96fb-13fde067045b-cert\") pod \"infra-operator-controller-manager-58944d7758-s79wq\" (UID: \"bb8f1731-54b2-4d71-96fb-13fde067045b\") " pod="openstack-operators/infra-operator-controller-manager-58944d7758-s79wq" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.014660 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wpfz\" (UniqueName: \"kubernetes.io/projected/bb8f1731-54b2-4d71-96fb-13fde067045b-kube-api-access-7wpfz\") pod \"infra-operator-controller-manager-58944d7758-s79wq\" (UID: \"bb8f1731-54b2-4d71-96fb-13fde067045b\") " pod="openstack-operators/infra-operator-controller-manager-58944d7758-s79wq" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.014684 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67j7x\" (UniqueName: \"kubernetes.io/projected/07926291-631c-415d-8aaa-c425852decd9-kube-api-access-67j7x\") pod \"heat-operator-controller-manager-59b8dcb766-tcs4c\" (UID: \"07926291-631c-415d-8aaa-c425852decd9\") " pod="openstack-operators/heat-operator-controller-manager-59b8dcb766-tcs4c" Dec 15 05:49:18 crc 
kubenswrapper[4747]: I1215 05:49:18.014698 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvrv6\" (UniqueName: \"kubernetes.io/projected/a9d4c90d-ecd6-4126-8d91-dfb784a64d54-kube-api-access-pvrv6\") pod \"designate-operator-controller-manager-66f8b87655-rkqrw\" (UID: \"a9d4c90d-ecd6-4126-8d91-dfb784a64d54\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-rkqrw" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.014726 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2jwh\" (UniqueName: \"kubernetes.io/projected/ed7a99f7-83b8-48f4-9cc9-135af2e16529-kube-api-access-c2jwh\") pod \"glance-operator-controller-manager-767f9d7567-hk9c4\" (UID: \"ed7a99f7-83b8-48f4-9cc9-135af2e16529\") " pod="openstack-operators/glance-operator-controller-manager-767f9d7567-hk9c4" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.014797 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfmmm\" (UniqueName: \"kubernetes.io/projected/c8a35ff2-385b-46d4-95e6-d7e85a7c8477-kube-api-access-pfmmm\") pod \"horizon-operator-controller-manager-6ccf486b9-cmgcn\" (UID: \"c8a35ff2-385b-46d4-95e6-d7e85a7c8477\") " pod="openstack-operators/horizon-operator-controller-manager-6ccf486b9-cmgcn" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.018243 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-9zrqv" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.025561 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-qw6tr"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.039817 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/mariadb-operator-controller-manager-f76f4954c-5chln"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.076207 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-snvkz"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.077899 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-snvkz" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.080790 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-54n5w" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.083519 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-sffcl"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.085418 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-sffcl" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.087029 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-c8fcj" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.091980 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-snvkz"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.110616 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-sffcl"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.117233 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrcs9\" (UniqueName: \"kubernetes.io/projected/5f14ea23-34de-4d4b-971d-dc90d34c44a9-kube-api-access-wrcs9\") pod 
\"neutron-operator-controller-manager-7cd87b778f-qw6tr\" (UID: \"5f14ea23-34de-4d4b-971d-dc90d34c44a9\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-qw6tr" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.117294 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9cdx\" (UniqueName: \"kubernetes.io/projected/fdda9bcd-0316-4549-af8b-ae0e151e59d7-kube-api-access-p9cdx\") pod \"keystone-operator-controller-manager-5c7cbf548f-v7cjm\" (UID: \"fdda9bcd-0316-4549-af8b-ae0e151e59d7\") " pod="openstack-operators/keystone-operator-controller-manager-5c7cbf548f-v7cjm" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.117335 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mfv9\" (UniqueName: \"kubernetes.io/projected/50d161a9-2162-4642-bfd4-74bde1129134-kube-api-access-5mfv9\") pod \"cinder-operator-controller-manager-5cf45c46bd-ggkl6\" (UID: \"50d161a9-2162-4642-bfd4-74bde1129134\") " pod="openstack-operators/cinder-operator-controller-manager-5cf45c46bd-ggkl6" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.117384 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pnhz\" (UniqueName: \"kubernetes.io/projected/b93e01ce-98e3-4941-8721-d9ce67414730-kube-api-access-9pnhz\") pod \"ironic-operator-controller-manager-f458558d7-vf58x\" (UID: \"b93e01ce-98e3-4941-8721-d9ce67414730\") " pod="openstack-operators/ironic-operator-controller-manager-f458558d7-vf58x" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.117458 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24hnv\" (UniqueName: \"kubernetes.io/projected/e6558c12-d59f-4593-9605-a7dc6c19e766-kube-api-access-24hnv\") pod \"manila-operator-controller-manager-5fdd9786f7-dg8cj\" (UID: 
\"e6558c12-d59f-4593-9605-a7dc6c19e766\") " pod="openstack-operators/manila-operator-controller-manager-5fdd9786f7-dg8cj" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.117502 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb8f1731-54b2-4d71-96fb-13fde067045b-cert\") pod \"infra-operator-controller-manager-58944d7758-s79wq\" (UID: \"bb8f1731-54b2-4d71-96fb-13fde067045b\") " pod="openstack-operators/infra-operator-controller-manager-58944d7758-s79wq" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.117533 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wpfz\" (UniqueName: \"kubernetes.io/projected/bb8f1731-54b2-4d71-96fb-13fde067045b-kube-api-access-7wpfz\") pod \"infra-operator-controller-manager-58944d7758-s79wq\" (UID: \"bb8f1731-54b2-4d71-96fb-13fde067045b\") " pod="openstack-operators/infra-operator-controller-manager-58944d7758-s79wq" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.117655 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67j7x\" (UniqueName: \"kubernetes.io/projected/07926291-631c-415d-8aaa-c425852decd9-kube-api-access-67j7x\") pod \"heat-operator-controller-manager-59b8dcb766-tcs4c\" (UID: \"07926291-631c-415d-8aaa-c425852decd9\") " pod="openstack-operators/heat-operator-controller-manager-59b8dcb766-tcs4c" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.117760 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jqkj\" (UniqueName: \"kubernetes.io/projected/dc8104ce-563e-4e6f-b61d-18e2bdc49879-kube-api-access-6jqkj\") pod \"mariadb-operator-controller-manager-f76f4954c-5chln\" (UID: \"dc8104ce-563e-4e6f-b61d-18e2bdc49879\") " pod="openstack-operators/mariadb-operator-controller-manager-f76f4954c-5chln" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.117790 4747 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvrv6\" (UniqueName: \"kubernetes.io/projected/a9d4c90d-ecd6-4126-8d91-dfb784a64d54-kube-api-access-pvrv6\") pod \"designate-operator-controller-manager-66f8b87655-rkqrw\" (UID: \"a9d4c90d-ecd6-4126-8d91-dfb784a64d54\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-rkqrw" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.117820 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2jwh\" (UniqueName: \"kubernetes.io/projected/ed7a99f7-83b8-48f4-9cc9-135af2e16529-kube-api-access-c2jwh\") pod \"glance-operator-controller-manager-767f9d7567-hk9c4\" (UID: \"ed7a99f7-83b8-48f4-9cc9-135af2e16529\") " pod="openstack-operators/glance-operator-controller-manager-767f9d7567-hk9c4" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.117883 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfmmm\" (UniqueName: \"kubernetes.io/projected/c8a35ff2-385b-46d4-95e6-d7e85a7c8477-kube-api-access-pfmmm\") pod \"horizon-operator-controller-manager-6ccf486b9-cmgcn\" (UID: \"c8a35ff2-385b-46d4-95e6-d7e85a7c8477\") " pod="openstack-operators/horizon-operator-controller-manager-6ccf486b9-cmgcn" Dec 15 05:49:18 crc kubenswrapper[4747]: E1215 05:49:18.118504 4747 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 15 05:49:18 crc kubenswrapper[4747]: E1215 05:49:18.118556 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb8f1731-54b2-4d71-96fb-13fde067045b-cert podName:bb8f1731-54b2-4d71-96fb-13fde067045b nodeName:}" failed. No retries permitted until 2025-12-15 05:49:18.618538204 +0000 UTC m=+722.315050121 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb8f1731-54b2-4d71-96fb-13fde067045b-cert") pod "infra-operator-controller-manager-58944d7758-s79wq" (UID: "bb8f1731-54b2-4d71-96fb-13fde067045b") : secret "infra-operator-webhook-server-cert" not found Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.120493 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-689f887b54sfqvx"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.121333 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-689f887b54sfqvx" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.121695 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-95949466-pzsnr" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.123392 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.123623 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-krlp8" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.136960 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfmmm\" (UniqueName: \"kubernetes.io/projected/c8a35ff2-385b-46d4-95e6-d7e85a7c8477-kube-api-access-pfmmm\") pod \"horizon-operator-controller-manager-6ccf486b9-cmgcn\" (UID: \"c8a35ff2-385b-46d4-95e6-d7e85a7c8477\") " pod="openstack-operators/horizon-operator-controller-manager-6ccf486b9-cmgcn" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.138898 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvrv6\" (UniqueName: 
\"kubernetes.io/projected/a9d4c90d-ecd6-4126-8d91-dfb784a64d54-kube-api-access-pvrv6\") pod \"designate-operator-controller-manager-66f8b87655-rkqrw\" (UID: \"a9d4c90d-ecd6-4126-8d91-dfb784a64d54\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-rkqrw" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.140799 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mfv9\" (UniqueName: \"kubernetes.io/projected/50d161a9-2162-4642-bfd4-74bde1129134-kube-api-access-5mfv9\") pod \"cinder-operator-controller-manager-5cf45c46bd-ggkl6\" (UID: \"50d161a9-2162-4642-bfd4-74bde1129134\") " pod="openstack-operators/cinder-operator-controller-manager-5cf45c46bd-ggkl6" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.141894 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wpfz\" (UniqueName: \"kubernetes.io/projected/bb8f1731-54b2-4d71-96fb-13fde067045b-kube-api-access-7wpfz\") pod \"infra-operator-controller-manager-58944d7758-s79wq\" (UID: \"bb8f1731-54b2-4d71-96fb-13fde067045b\") " pod="openstack-operators/infra-operator-controller-manager-58944d7758-s79wq" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.142027 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-jmxtj"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.143183 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jmxtj" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.143606 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2jwh\" (UniqueName: \"kubernetes.io/projected/ed7a99f7-83b8-48f4-9cc9-135af2e16529-kube-api-access-c2jwh\") pod \"glance-operator-controller-manager-767f9d7567-hk9c4\" (UID: \"ed7a99f7-83b8-48f4-9cc9-135af2e16529\") " pod="openstack-operators/glance-operator-controller-manager-767f9d7567-hk9c4" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.144069 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-rkqrw" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.146127 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-jmxtj"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.154330 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4s8pq" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.154339 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8665b56d78-c2gjc"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.155277 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-689f887b54sfqvx"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.155348 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8665b56d78-c2gjc" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.156410 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-l6lln" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.157861 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8665b56d78-c2gjc"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.163893 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67j7x\" (UniqueName: \"kubernetes.io/projected/07926291-631c-415d-8aaa-c425852decd9-kube-api-access-67j7x\") pod \"heat-operator-controller-manager-59b8dcb766-tcs4c\" (UID: \"07926291-631c-415d-8aaa-c425852decd9\") " pod="openstack-operators/heat-operator-controller-manager-59b8dcb766-tcs4c" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.168293 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5c6df8f9-tm9tq"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.169250 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5c6df8f9-tm9tq" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.170547 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5c6df8f9-tm9tq"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.171050 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-767f9d7567-hk9c4" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.171727 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-rwmt6" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.190113 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-59b8dcb766-tcs4c" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.206690 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6ccf486b9-cmgcn" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.219019 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pnhz\" (UniqueName: \"kubernetes.io/projected/b93e01ce-98e3-4941-8721-d9ce67414730-kube-api-access-9pnhz\") pod \"ironic-operator-controller-manager-f458558d7-vf58x\" (UID: \"b93e01ce-98e3-4941-8721-d9ce67414730\") " pod="openstack-operators/ironic-operator-controller-manager-f458558d7-vf58x" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.219083 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24hnv\" (UniqueName: \"kubernetes.io/projected/e6558c12-d59f-4593-9605-a7dc6c19e766-kube-api-access-24hnv\") pod \"manila-operator-controller-manager-5fdd9786f7-dg8cj\" (UID: \"e6558c12-d59f-4593-9605-a7dc6c19e766\") " pod="openstack-operators/manila-operator-controller-manager-5fdd9786f7-dg8cj" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.219143 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jqkj\" (UniqueName: \"kubernetes.io/projected/dc8104ce-563e-4e6f-b61d-18e2bdc49879-kube-api-access-6jqkj\") pod \"mariadb-operator-controller-manager-f76f4954c-5chln\" (UID: 
\"dc8104ce-563e-4e6f-b61d-18e2bdc49879\") " pod="openstack-operators/mariadb-operator-controller-manager-f76f4954c-5chln" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.219180 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blbwj\" (UniqueName: \"kubernetes.io/projected/5a07861b-82a4-47c3-8255-3b76b44da9d6-kube-api-access-blbwj\") pod \"octavia-operator-controller-manager-68c649d9d-sffcl\" (UID: \"5a07861b-82a4-47c3-8255-3b76b44da9d6\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-sffcl" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.219259 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrcs9\" (UniqueName: \"kubernetes.io/projected/5f14ea23-34de-4d4b-971d-dc90d34c44a9-kube-api-access-wrcs9\") pod \"neutron-operator-controller-manager-7cd87b778f-qw6tr\" (UID: \"5f14ea23-34de-4d4b-971d-dc90d34c44a9\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-qw6tr" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.219283 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9cdx\" (UniqueName: \"kubernetes.io/projected/fdda9bcd-0316-4549-af8b-ae0e151e59d7-kube-api-access-p9cdx\") pod \"keystone-operator-controller-manager-5c7cbf548f-v7cjm\" (UID: \"fdda9bcd-0316-4549-af8b-ae0e151e59d7\") " pod="openstack-operators/keystone-operator-controller-manager-5c7cbf548f-v7cjm" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.219303 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpp5g\" (UniqueName: \"kubernetes.io/projected/60924e24-00f9-4f6a-bf7e-385f8e54a027-kube-api-access-rpp5g\") pod \"nova-operator-controller-manager-5fbbf8b6cc-snvkz\" (UID: \"60924e24-00f9-4f6a-bf7e-385f8e54a027\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-snvkz" 
Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.248236 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pnhz\" (UniqueName: \"kubernetes.io/projected/b93e01ce-98e3-4941-8721-d9ce67414730-kube-api-access-9pnhz\") pod \"ironic-operator-controller-manager-f458558d7-vf58x\" (UID: \"b93e01ce-98e3-4941-8721-d9ce67414730\") " pod="openstack-operators/ironic-operator-controller-manager-f458558d7-vf58x" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.248310 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-97d456b9-gqlwk"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.249615 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-97d456b9-gqlwk" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.250843 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24hnv\" (UniqueName: \"kubernetes.io/projected/e6558c12-d59f-4593-9605-a7dc6c19e766-kube-api-access-24hnv\") pod \"manila-operator-controller-manager-5fdd9786f7-dg8cj\" (UID: \"e6558c12-d59f-4593-9605-a7dc6c19e766\") " pod="openstack-operators/manila-operator-controller-manager-5fdd9786f7-dg8cj" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.251499 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrcs9\" (UniqueName: \"kubernetes.io/projected/5f14ea23-34de-4d4b-971d-dc90d34c44a9-kube-api-access-wrcs9\") pod \"neutron-operator-controller-manager-7cd87b778f-qw6tr\" (UID: \"5f14ea23-34de-4d4b-971d-dc90d34c44a9\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-qw6tr" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.253733 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-4c7cb" Dec 15 05:49:18 crc 
kubenswrapper[4747]: I1215 05:49:18.254512 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9cdx\" (UniqueName: \"kubernetes.io/projected/fdda9bcd-0316-4549-af8b-ae0e151e59d7-kube-api-access-p9cdx\") pod \"keystone-operator-controller-manager-5c7cbf548f-v7cjm\" (UID: \"fdda9bcd-0316-4549-af8b-ae0e151e59d7\") " pod="openstack-operators/keystone-operator-controller-manager-5c7cbf548f-v7cjm" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.256216 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jqkj\" (UniqueName: \"kubernetes.io/projected/dc8104ce-563e-4e6f-b61d-18e2bdc49879-kube-api-access-6jqkj\") pod \"mariadb-operator-controller-manager-f76f4954c-5chln\" (UID: \"dc8104ce-563e-4e6f-b61d-18e2bdc49879\") " pod="openstack-operators/mariadb-operator-controller-manager-f76f4954c-5chln" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.260603 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-97d456b9-gqlwk"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.268692 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f458558d7-vf58x" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.276692 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5c7cbf548f-v7cjm" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.292012 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5fdd9786f7-dg8cj" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.311383 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-756ccf86c7-6dlgk"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.313340 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-756ccf86c7-6dlgk" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.320917 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9td22\" (UniqueName: \"kubernetes.io/projected/c1d38621-ff5b-4d92-8457-9568c6b67416-kube-api-access-9td22\") pod \"placement-operator-controller-manager-8665b56d78-c2gjc\" (UID: \"c1d38621-ff5b-4d92-8457-9568c6b67416\") " pod="openstack-operators/placement-operator-controller-manager-8665b56d78-c2gjc" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.321067 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3858e881-df69-47eb-8a78-fa48f7ca7f87-cert\") pod \"openstack-baremetal-operator-controller-manager-689f887b54sfqvx\" (UID: \"3858e881-df69-47eb-8a78-fa48f7ca7f87\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-689f887b54sfqvx" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.321118 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blbwj\" (UniqueName: \"kubernetes.io/projected/5a07861b-82a4-47c3-8255-3b76b44da9d6-kube-api-access-blbwj\") pod \"octavia-operator-controller-manager-68c649d9d-sffcl\" (UID: \"5a07861b-82a4-47c3-8255-3b76b44da9d6\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-sffcl" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.321174 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhnmw\" (UniqueName: \"kubernetes.io/projected/3858e881-df69-47eb-8a78-fa48f7ca7f87-kube-api-access-nhnmw\") pod \"openstack-baremetal-operator-controller-manager-689f887b54sfqvx\" (UID: \"3858e881-df69-47eb-8a78-fa48f7ca7f87\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-689f887b54sfqvx" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.321208 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4rl2\" (UniqueName: \"kubernetes.io/projected/4e1be8a6-df60-418b-911f-efbf8aa5cf5a-kube-api-access-c4rl2\") pod \"swift-operator-controller-manager-5c6df8f9-tm9tq\" (UID: \"4e1be8a6-df60-418b-911f-efbf8aa5cf5a\") " pod="openstack-operators/swift-operator-controller-manager-5c6df8f9-tm9tq" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.321248 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpp5g\" (UniqueName: \"kubernetes.io/projected/60924e24-00f9-4f6a-bf7e-385f8e54a027-kube-api-access-rpp5g\") pod \"nova-operator-controller-manager-5fbbf8b6cc-snvkz\" (UID: \"60924e24-00f9-4f6a-bf7e-385f8e54a027\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-snvkz" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.321266 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwnhp\" (UniqueName: \"kubernetes.io/projected/e1cafba6-81fa-4f70-b79d-4d02cdd194a3-kube-api-access-xwnhp\") pod \"ovn-operator-controller-manager-bf6d4f946-jmxtj\" (UID: \"e1cafba6-81fa-4f70-b79d-4d02cdd194a3\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jmxtj" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.322139 4747 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"test-operator-controller-manager-dockercfg-g6p5n" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.324726 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-756ccf86c7-6dlgk"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.336695 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpp5g\" (UniqueName: \"kubernetes.io/projected/60924e24-00f9-4f6a-bf7e-385f8e54a027-kube-api-access-rpp5g\") pod \"nova-operator-controller-manager-5fbbf8b6cc-snvkz\" (UID: \"60924e24-00f9-4f6a-bf7e-385f8e54a027\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-snvkz" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.336841 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-qw6tr" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.343175 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blbwj\" (UniqueName: \"kubernetes.io/projected/5a07861b-82a4-47c3-8255-3b76b44da9d6-kube-api-access-blbwj\") pod \"octavia-operator-controller-manager-68c649d9d-sffcl\" (UID: \"5a07861b-82a4-47c3-8255-3b76b44da9d6\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-sffcl" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.345262 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-f76f4954c-5chln" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.410861 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-snvkz" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.412962 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-55f78b7c4c-rgxgj"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.413875 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-55f78b7c4c-rgxgj" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.419284 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-jj46g" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.422248 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-sffcl" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.422739 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9td22\" (UniqueName: \"kubernetes.io/projected/c1d38621-ff5b-4d92-8457-9568c6b67416-kube-api-access-9td22\") pod \"placement-operator-controller-manager-8665b56d78-c2gjc\" (UID: \"c1d38621-ff5b-4d92-8457-9568c6b67416\") " pod="openstack-operators/placement-operator-controller-manager-8665b56d78-c2gjc" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.422803 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kpx8\" (UniqueName: \"kubernetes.io/projected/3f5c0d61-d8f5-4bfb-87c1-4f795057abd2-kube-api-access-9kpx8\") pod \"test-operator-controller-manager-756ccf86c7-6dlgk\" (UID: \"3f5c0d61-d8f5-4bfb-87c1-4f795057abd2\") " pod="openstack-operators/test-operator-controller-manager-756ccf86c7-6dlgk" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.422834 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3858e881-df69-47eb-8a78-fa48f7ca7f87-cert\") pod \"openstack-baremetal-operator-controller-manager-689f887b54sfqvx\" (UID: \"3858e881-df69-47eb-8a78-fa48f7ca7f87\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-689f887b54sfqvx" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.422858 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h2fl\" (UniqueName: \"kubernetes.io/projected/df77558c-ad92-43a1-9d9a-e3fac782b0e8-kube-api-access-2h2fl\") pod \"telemetry-operator-controller-manager-97d456b9-gqlwk\" (UID: \"df77558c-ad92-43a1-9d9a-e3fac782b0e8\") " pod="openstack-operators/telemetry-operator-controller-manager-97d456b9-gqlwk" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.422907 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhnmw\" (UniqueName: \"kubernetes.io/projected/3858e881-df69-47eb-8a78-fa48f7ca7f87-kube-api-access-nhnmw\") pod \"openstack-baremetal-operator-controller-manager-689f887b54sfqvx\" (UID: \"3858e881-df69-47eb-8a78-fa48f7ca7f87\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-689f887b54sfqvx" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.422953 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4rl2\" (UniqueName: \"kubernetes.io/projected/4e1be8a6-df60-418b-911f-efbf8aa5cf5a-kube-api-access-c4rl2\") pod \"swift-operator-controller-manager-5c6df8f9-tm9tq\" (UID: \"4e1be8a6-df60-418b-911f-efbf8aa5cf5a\") " pod="openstack-operators/swift-operator-controller-manager-5c6df8f9-tm9tq" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.422976 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwnhp\" (UniqueName: 
\"kubernetes.io/projected/e1cafba6-81fa-4f70-b79d-4d02cdd194a3-kube-api-access-xwnhp\") pod \"ovn-operator-controller-manager-bf6d4f946-jmxtj\" (UID: \"e1cafba6-81fa-4f70-b79d-4d02cdd194a3\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jmxtj" Dec 15 05:49:18 crc kubenswrapper[4747]: E1215 05:49:18.423203 4747 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 15 05:49:18 crc kubenswrapper[4747]: E1215 05:49:18.423268 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3858e881-df69-47eb-8a78-fa48f7ca7f87-cert podName:3858e881-df69-47eb-8a78-fa48f7ca7f87 nodeName:}" failed. No retries permitted until 2025-12-15 05:49:18.923246423 +0000 UTC m=+722.619758340 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3858e881-df69-47eb-8a78-fa48f7ca7f87-cert") pod "openstack-baremetal-operator-controller-manager-689f887b54sfqvx" (UID: "3858e881-df69-47eb-8a78-fa48f7ca7f87") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.423628 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-55f78b7c4c-rgxgj"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.425152 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5cf45c46bd-ggkl6" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.446220 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9td22\" (UniqueName: \"kubernetes.io/projected/c1d38621-ff5b-4d92-8457-9568c6b67416-kube-api-access-9td22\") pod \"placement-operator-controller-manager-8665b56d78-c2gjc\" (UID: \"c1d38621-ff5b-4d92-8457-9568c6b67416\") " pod="openstack-operators/placement-operator-controller-manager-8665b56d78-c2gjc" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.449681 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4rl2\" (UniqueName: \"kubernetes.io/projected/4e1be8a6-df60-418b-911f-efbf8aa5cf5a-kube-api-access-c4rl2\") pod \"swift-operator-controller-manager-5c6df8f9-tm9tq\" (UID: \"4e1be8a6-df60-418b-911f-efbf8aa5cf5a\") " pod="openstack-operators/swift-operator-controller-manager-5c6df8f9-tm9tq" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.449847 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwnhp\" (UniqueName: \"kubernetes.io/projected/e1cafba6-81fa-4f70-b79d-4d02cdd194a3-kube-api-access-xwnhp\") pod \"ovn-operator-controller-manager-bf6d4f946-jmxtj\" (UID: \"e1cafba6-81fa-4f70-b79d-4d02cdd194a3\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jmxtj" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.453034 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhnmw\" (UniqueName: \"kubernetes.io/projected/3858e881-df69-47eb-8a78-fa48f7ca7f87-kube-api-access-nhnmw\") pod \"openstack-baremetal-operator-controller-manager-689f887b54sfqvx\" (UID: \"3858e881-df69-47eb-8a78-fa48f7ca7f87\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-689f887b54sfqvx" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.489062 4747 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-95949466-pzsnr"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.521074 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jmxtj" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.524173 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6drbg\" (UniqueName: \"kubernetes.io/projected/2e8d5dd7-baa6-49fb-9f9f-735905ac6e61-kube-api-access-6drbg\") pod \"watcher-operator-controller-manager-55f78b7c4c-rgxgj\" (UID: \"2e8d5dd7-baa6-49fb-9f9f-735905ac6e61\") " pod="openstack-operators/watcher-operator-controller-manager-55f78b7c4c-rgxgj" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.524264 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kpx8\" (UniqueName: \"kubernetes.io/projected/3f5c0d61-d8f5-4bfb-87c1-4f795057abd2-kube-api-access-9kpx8\") pod \"test-operator-controller-manager-756ccf86c7-6dlgk\" (UID: \"3f5c0d61-d8f5-4bfb-87c1-4f795057abd2\") " pod="openstack-operators/test-operator-controller-manager-756ccf86c7-6dlgk" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.524312 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h2fl\" (UniqueName: \"kubernetes.io/projected/df77558c-ad92-43a1-9d9a-e3fac782b0e8-kube-api-access-2h2fl\") pod \"telemetry-operator-controller-manager-97d456b9-gqlwk\" (UID: \"df77558c-ad92-43a1-9d9a-e3fac782b0e8\") " pod="openstack-operators/telemetry-operator-controller-manager-97d456b9-gqlwk" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.527418 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.528293 
4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.536428 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.536629 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-65vv5" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.536785 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.542089 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8665b56d78-c2gjc" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.568646 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5c6df8f9-tm9tq" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.581068 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kpx8\" (UniqueName: \"kubernetes.io/projected/3f5c0d61-d8f5-4bfb-87c1-4f795057abd2-kube-api-access-9kpx8\") pod \"test-operator-controller-manager-756ccf86c7-6dlgk\" (UID: \"3f5c0d61-d8f5-4bfb-87c1-4f795057abd2\") " pod="openstack-operators/test-operator-controller-manager-756ccf86c7-6dlgk" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.590552 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h2fl\" (UniqueName: \"kubernetes.io/projected/df77558c-ad92-43a1-9d9a-e3fac782b0e8-kube-api-access-2h2fl\") pod \"telemetry-operator-controller-manager-97d456b9-gqlwk\" (UID: \"df77558c-ad92-43a1-9d9a-e3fac782b0e8\") " pod="openstack-operators/telemetry-operator-controller-manager-97d456b9-gqlwk" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.591591 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.608645 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-95949466-pzsnr" event={"ID":"966a3797-97c2-4e8d-8799-6b8a287efd78","Type":"ContainerStarted","Data":"b458fb3d177cfe042b4435de5d8c16ec71650b3c02261c1358679ed8656c34ee"} Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.619743 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-rkqrw"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.626683 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6drbg\" (UniqueName: 
\"kubernetes.io/projected/2e8d5dd7-baa6-49fb-9f9f-735905ac6e61-kube-api-access-6drbg\") pod \"watcher-operator-controller-manager-55f78b7c4c-rgxgj\" (UID: \"2e8d5dd7-baa6-49fb-9f9f-735905ac6e61\") " pod="openstack-operators/watcher-operator-controller-manager-55f78b7c4c-rgxgj" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.626804 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb8f1731-54b2-4d71-96fb-13fde067045b-cert\") pod \"infra-operator-controller-manager-58944d7758-s79wq\" (UID: \"bb8f1731-54b2-4d71-96fb-13fde067045b\") " pod="openstack-operators/infra-operator-controller-manager-58944d7758-s79wq" Dec 15 05:49:18 crc kubenswrapper[4747]: E1215 05:49:18.627037 4747 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 15 05:49:18 crc kubenswrapper[4747]: E1215 05:49:18.627091 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb8f1731-54b2-4d71-96fb-13fde067045b-cert podName:bb8f1731-54b2-4d71-96fb-13fde067045b nodeName:}" failed. No retries permitted until 2025-12-15 05:49:19.627073988 +0000 UTC m=+723.323585905 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb8f1731-54b2-4d71-96fb-13fde067045b-cert") pod "infra-operator-controller-manager-58944d7758-s79wq" (UID: "bb8f1731-54b2-4d71-96fb-13fde067045b") : secret "infra-operator-webhook-server-cert" not found Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.650325 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6drbg\" (UniqueName: \"kubernetes.io/projected/2e8d5dd7-baa6-49fb-9f9f-735905ac6e61-kube-api-access-6drbg\") pod \"watcher-operator-controller-manager-55f78b7c4c-rgxgj\" (UID: \"2e8d5dd7-baa6-49fb-9f9f-735905ac6e61\") " pod="openstack-operators/watcher-operator-controller-manager-55f78b7c4c-rgxgj" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.653761 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-756ccf86c7-6dlgk" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.673374 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtm79"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.674392 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtm79"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.674473 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtm79" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.676722 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-gx6wm" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.729429 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-metrics-certs\") pod \"openstack-operator-controller-manager-56f6fbdf6-ch5s4\" (UID: \"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0\") " pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.729495 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-webhook-certs\") pod \"openstack-operator-controller-manager-56f6fbdf6-ch5s4\" (UID: \"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0\") " pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.729613 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7s6j\" (UniqueName: \"kubernetes.io/projected/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-kube-api-access-b7s6j\") pod \"openstack-operator-controller-manager-56f6fbdf6-ch5s4\" (UID: \"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0\") " pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.743814 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-55f78b7c4c-rgxgj" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.830134 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-59b8dcb766-tcs4c"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.831493 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7s6j\" (UniqueName: \"kubernetes.io/projected/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-kube-api-access-b7s6j\") pod \"openstack-operator-controller-manager-56f6fbdf6-ch5s4\" (UID: \"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0\") " pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.831555 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xtth\" (UniqueName: \"kubernetes.io/projected/3818fc80-b8e4-4dc2-9470-587cf10a2350-kube-api-access-7xtth\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vtm79\" (UID: \"3818fc80-b8e4-4dc2-9470-587cf10a2350\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtm79" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.831631 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-metrics-certs\") pod \"openstack-operator-controller-manager-56f6fbdf6-ch5s4\" (UID: \"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0\") " pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.831676 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-webhook-certs\") pod \"openstack-operator-controller-manager-56f6fbdf6-ch5s4\" 
(UID: \"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0\") " pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:18 crc kubenswrapper[4747]: E1215 05:49:18.831803 4747 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 15 05:49:18 crc kubenswrapper[4747]: E1215 05:49:18.832053 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-webhook-certs podName:e3f1bf4c-044b-49d5-be51-b853e2f6a7b0 nodeName:}" failed. No retries permitted until 2025-12-15 05:49:19.332031699 +0000 UTC m=+723.028543615 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-webhook-certs") pod "openstack-operator-controller-manager-56f6fbdf6-ch5s4" (UID: "e3f1bf4c-044b-49d5-be51-b853e2f6a7b0") : secret "webhook-server-cert" not found Dec 15 05:49:18 crc kubenswrapper[4747]: E1215 05:49:18.832295 4747 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 15 05:49:18 crc kubenswrapper[4747]: E1215 05:49:18.832395 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-metrics-certs podName:e3f1bf4c-044b-49d5-be51-b853e2f6a7b0 nodeName:}" failed. No retries permitted until 2025-12-15 05:49:19.332362029 +0000 UTC m=+723.028873947 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-metrics-certs") pod "openstack-operator-controller-manager-56f6fbdf6-ch5s4" (UID: "e3f1bf4c-044b-49d5-be51-b853e2f6a7b0") : secret "metrics-server-cert" not found Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.838257 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6ccf486b9-cmgcn"] Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.851624 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7s6j\" (UniqueName: \"kubernetes.io/projected/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-kube-api-access-b7s6j\") pod \"openstack-operator-controller-manager-56f6fbdf6-ch5s4\" (UID: \"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0\") " pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.882692 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-97d456b9-gqlwk" Dec 15 05:49:18 crc kubenswrapper[4747]: W1215 05:49:18.888322 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8a35ff2_385b_46d4_95e6_d7e85a7c8477.slice/crio-9da6abf4e29db00bdb29b99f3e6b15e8abd678df673f2c294819a64fe9659478 WatchSource:0}: Error finding container 9da6abf4e29db00bdb29b99f3e6b15e8abd678df673f2c294819a64fe9659478: Status 404 returned error can't find the container with id 9da6abf4e29db00bdb29b99f3e6b15e8abd678df673f2c294819a64fe9659478 Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.934646 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3858e881-df69-47eb-8a78-fa48f7ca7f87-cert\") pod \"openstack-baremetal-operator-controller-manager-689f887b54sfqvx\" (UID: \"3858e881-df69-47eb-8a78-fa48f7ca7f87\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-689f887b54sfqvx" Dec 15 05:49:18 crc kubenswrapper[4747]: E1215 05:49:18.934816 4747 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.934861 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xtth\" (UniqueName: \"kubernetes.io/projected/3818fc80-b8e4-4dc2-9470-587cf10a2350-kube-api-access-7xtth\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vtm79\" (UID: \"3818fc80-b8e4-4dc2-9470-587cf10a2350\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtm79" Dec 15 05:49:18 crc kubenswrapper[4747]: E1215 05:49:18.934879 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3858e881-df69-47eb-8a78-fa48f7ca7f87-cert 
podName:3858e881-df69-47eb-8a78-fa48f7ca7f87 nodeName:}" failed. No retries permitted until 2025-12-15 05:49:19.934861338 +0000 UTC m=+723.631373255 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3858e881-df69-47eb-8a78-fa48f7ca7f87-cert") pod "openstack-baremetal-operator-controller-manager-689f887b54sfqvx" (UID: "3858e881-df69-47eb-8a78-fa48f7ca7f87") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 15 05:49:18 crc kubenswrapper[4747]: I1215 05:49:18.952202 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xtth\" (UniqueName: \"kubernetes.io/projected/3818fc80-b8e4-4dc2-9470-587cf10a2350-kube-api-access-7xtth\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vtm79\" (UID: \"3818fc80-b8e4-4dc2-9470-587cf10a2350\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtm79" Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.008532 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtm79" Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.081814 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-qw6tr"] Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.089612 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-767f9d7567-hk9c4"] Dec 15 05:49:19 crc kubenswrapper[4747]: W1215 05:49:19.101525 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f14ea23_34de_4d4b_971d_dc90d34c44a9.slice/crio-77e072abbf4abd8ad1cb15b075effd6484b298a9deaf22f12bc2a7162e0f1567 WatchSource:0}: Error finding container 77e072abbf4abd8ad1cb15b075effd6484b298a9deaf22f12bc2a7162e0f1567: Status 404 returned error can't find the container with id 77e072abbf4abd8ad1cb15b075effd6484b298a9deaf22f12bc2a7162e0f1567 Dec 15 05:49:19 crc kubenswrapper[4747]: W1215 05:49:19.103724 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded7a99f7_83b8_48f4_9cc9_135af2e16529.slice/crio-d1a15fa2bc8f05bc11c2c0e635a48f948890eed3b40ed7525f7553075a9ba5ed WatchSource:0}: Error finding container d1a15fa2bc8f05bc11c2c0e635a48f948890eed3b40ed7525f7553075a9ba5ed: Status 404 returned error can't find the container with id d1a15fa2bc8f05bc11c2c0e635a48f948890eed3b40ed7525f7553075a9ba5ed Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.184187 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f458558d7-vf58x"] Dec 15 05:49:19 crc kubenswrapper[4747]: W1215 05:49:19.193371 4747 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb93e01ce_98e3_4941_8721_d9ce67414730.slice/crio-8f2d6b3096253da7036d99b764f300b94c6349330fde249a690ce6a93ba7f566 WatchSource:0}: Error finding container 8f2d6b3096253da7036d99b764f300b94c6349330fde249a690ce6a93ba7f566: Status 404 returned error can't find the container with id 8f2d6b3096253da7036d99b764f300b94c6349330fde249a690ce6a93ba7f566 Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.207913 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5fdd9786f7-dg8cj"] Dec 15 05:49:19 crc kubenswrapper[4747]: W1215 05:49:19.212443 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a07861b_82a4_47c3_8255_3b76b44da9d6.slice/crio-2af75897e78631785ec4835ea7284be77a3a4b4d4176975c7bbc379a6c2c8a3a WatchSource:0}: Error finding container 2af75897e78631785ec4835ea7284be77a3a4b4d4176975c7bbc379a6c2c8a3a: Status 404 returned error can't find the container with id 2af75897e78631785ec4835ea7284be77a3a4b4d4176975c7bbc379a6c2c8a3a Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.213029 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-sffcl"] Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.216427 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-f76f4954c-5chln"] Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.219617 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5c7cbf548f-v7cjm"] Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.314090 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8665b56d78-c2gjc"] Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 
05:49:19.325236 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-snvkz"] Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.332795 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-jmxtj"] Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.339442 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xwnhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bf6d4f946-jmxtj_openstack-operators(e1cafba6-81fa-4f70-b79d-4d02cdd194a3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.341416 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jmxtj" podUID="e1cafba6-81fa-4f70-b79d-4d02cdd194a3" Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.341478 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-metrics-certs\") pod \"openstack-operator-controller-manager-56f6fbdf6-ch5s4\" (UID: \"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0\") " pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.341529 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-webhook-certs\") 
pod \"openstack-operator-controller-manager-56f6fbdf6-ch5s4\" (UID: \"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0\") " pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.341627 4747 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.341689 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-metrics-certs podName:e3f1bf4c-044b-49d5-be51-b853e2f6a7b0 nodeName:}" failed. No retries permitted until 2025-12-15 05:49:20.341674254 +0000 UTC m=+724.038186171 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-metrics-certs") pod "openstack-operator-controller-manager-56f6fbdf6-ch5s4" (UID: "e3f1bf4c-044b-49d5-be51-b853e2f6a7b0") : secret "metrics-server-cert" not found Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.341719 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5cf45c46bd-ggkl6"] Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.341790 4747 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.341845 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-webhook-certs podName:e3f1bf4c-044b-49d5-be51-b853e2f6a7b0 nodeName:}" failed. No retries permitted until 2025-12-15 05:49:20.341828935 +0000 UTC m=+724.038340853 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-webhook-certs") pod "openstack-operator-controller-manager-56f6fbdf6-ch5s4" (UID: "e3f1bf4c-044b-49d5-be51-b853e2f6a7b0") : secret "webhook-server-cert" not found Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.346213 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtm79"] Dec 15 05:49:19 crc kubenswrapper[4747]: W1215 05:49:19.347148 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3818fc80_b8e4_4dc2_9470_587cf10a2350.slice/crio-3cfd872791b0502f1ef1be742ce92988ff6630f701f887c4bec32c7da0791711 WatchSource:0}: Error finding container 3cfd872791b0502f1ef1be742ce92988ff6630f701f887c4bec32c7da0791711: Status 404 returned error can't find the container with id 3cfd872791b0502f1ef1be742ce92988ff6630f701f887c4bec32c7da0791711 Dec 15 05:49:19 crc kubenswrapper[4747]: W1215 05:49:19.349846 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50d161a9_2162_4642_bfd4_74bde1129134.slice/crio-6624a3d23aa840a5000bdc634132a76ce7a781c9aedc13216538f5a767ce6007 WatchSource:0}: Error finding container 6624a3d23aa840a5000bdc634132a76ce7a781c9aedc13216538f5a767ce6007: Status 404 returned error can't find the container with id 6624a3d23aa840a5000bdc634132a76ce7a781c9aedc13216538f5a767ce6007 Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.349918 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7xtth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vtm79_openstack-operators(3818fc80-b8e4-4dc2-9470-587cf10a2350): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.351141 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtm79" podUID="3818fc80-b8e4-4dc2-9470-587cf10a2350" Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.351650 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5mfv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-5cf45c46bd-ggkl6_openstack-operators(50d161a9-2162-4642-bfd4-74bde1129134): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.353035 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/cinder-operator-controller-manager-5cf45c46bd-ggkl6" podUID="50d161a9-2162-4642-bfd4-74bde1129134" Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.448412 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-756ccf86c7-6dlgk"] Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.455963 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5c6df8f9-tm9tq"] Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.459176 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-55f78b7c4c-rgxgj"] Dec 15 05:49:19 crc kubenswrapper[4747]: W1215 05:49:19.488623 4747 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e1be8a6_df60_418b_911f_efbf8aa5cf5a.slice/crio-9eec044f2eeac57329f1cfa06cf462d4f0585c5022f7e717b80727112c41ad93 WatchSource:0}: Error finding container 9eec044f2eeac57329f1cfa06cf462d4f0585c5022f7e717b80727112c41ad93: Status 404 returned error can't find the container with id 9eec044f2eeac57329f1cfa06cf462d4f0585c5022f7e717b80727112c41ad93 Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.489445 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6drbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-55f78b7c4c-rgxgj_openstack-operators(2e8d5dd7-baa6-49fb-9f9f-735905ac6e61): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.491009 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-55f78b7c4c-rgxgj" podUID="2e8d5dd7-baa6-49fb-9f9f-735905ac6e61" Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.497232 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c4rl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5c6df8f9-tm9tq_openstack-operators(4e1be8a6-df60-418b-911f-efbf8aa5cf5a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.498999 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-5c6df8f9-tm9tq" podUID="4e1be8a6-df60-418b-911f-efbf8aa5cf5a" Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.574271 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-97d456b9-gqlwk"] Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.588271 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2h2fl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-97d456b9-gqlwk_openstack-operators(df77558c-ad92-43a1-9d9a-e3fac782b0e8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.591087 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-97d456b9-gqlwk" podUID="df77558c-ad92-43a1-9d9a-e3fac782b0e8" Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.617739 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5fdd9786f7-dg8cj" event={"ID":"e6558c12-d59f-4593-9605-a7dc6c19e766","Type":"ContainerStarted","Data":"d1059ce1e2fd41c82841166e74f25306e6e728f50db3012047f9e43883318c43"} Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.619252 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-qw6tr" 
event={"ID":"5f14ea23-34de-4d4b-971d-dc90d34c44a9","Type":"ContainerStarted","Data":"77e072abbf4abd8ad1cb15b075effd6484b298a9deaf22f12bc2a7162e0f1567"} Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.620232 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5c6df8f9-tm9tq" event={"ID":"4e1be8a6-df60-418b-911f-efbf8aa5cf5a","Type":"ContainerStarted","Data":"9eec044f2eeac57329f1cfa06cf462d4f0585c5022f7e717b80727112c41ad93"} Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.621974 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991\\\"\"" pod="openstack-operators/swift-operator-controller-manager-5c6df8f9-tm9tq" podUID="4e1be8a6-df60-418b-911f-efbf8aa5cf5a" Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.623785 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-sffcl" event={"ID":"5a07861b-82a4-47c3-8255-3b76b44da9d6","Type":"ContainerStarted","Data":"2af75897e78631785ec4835ea7284be77a3a4b4d4176975c7bbc379a6c2c8a3a"} Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.625371 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-59b8dcb766-tcs4c" event={"ID":"07926291-631c-415d-8aaa-c425852decd9","Type":"ContainerStarted","Data":"88045fa06ee72e39b3a284beeb2e57d591cfcb61f9b4a8fe9d20dc8fec4d6ab8"} Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.627086 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-756ccf86c7-6dlgk" 
event={"ID":"3f5c0d61-d8f5-4bfb-87c1-4f795057abd2","Type":"ContainerStarted","Data":"c411babe31a9866f79cc028165f7a8cbbc0c3bae9914e76a2b1947a7d3b661cf"} Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.629290 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5cf45c46bd-ggkl6" event={"ID":"50d161a9-2162-4642-bfd4-74bde1129134","Type":"ContainerStarted","Data":"6624a3d23aa840a5000bdc634132a76ce7a781c9aedc13216538f5a767ce6007"} Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.631072 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6ccf486b9-cmgcn" event={"ID":"c8a35ff2-385b-46d4-95e6-d7e85a7c8477","Type":"ContainerStarted","Data":"9da6abf4e29db00bdb29b99f3e6b15e8abd678df673f2c294819a64fe9659478"} Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.631273 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-5cf45c46bd-ggkl6" podUID="50d161a9-2162-4642-bfd4-74bde1129134" Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.632455 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-snvkz" event={"ID":"60924e24-00f9-4f6a-bf7e-385f8e54a027","Type":"ContainerStarted","Data":"937ae4f55dc88792412e27e3ee3e1fd3eead7ae4fed80dd3cca7e6e82b33ab43"} Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.638810 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5c7cbf548f-v7cjm" 
event={"ID":"fdda9bcd-0316-4549-af8b-ae0e151e59d7","Type":"ContainerStarted","Data":"8bba1d7cdd4a26e64f294459a2c7b37cc954fdee7f5ea0fdea2b161035802921"} Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.645290 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8665b56d78-c2gjc" event={"ID":"c1d38621-ff5b-4d92-8457-9568c6b67416","Type":"ContainerStarted","Data":"f72f46e2f9b0bceb79cd991c7cb6c9048b3849cdcd8c24ab3a0749d4b3614561"} Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.646695 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-97d456b9-gqlwk" event={"ID":"df77558c-ad92-43a1-9d9a-e3fac782b0e8","Type":"ContainerStarted","Data":"df88b9c1a2210c6400d97e6038493fea017751bd1ab3fe2156320c783f48683a"} Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.647985 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-97d456b9-gqlwk" podUID="df77558c-ad92-43a1-9d9a-e3fac782b0e8" Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.649503 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f458558d7-vf58x" event={"ID":"b93e01ce-98e3-4941-8721-d9ce67414730","Type":"ContainerStarted","Data":"8f2d6b3096253da7036d99b764f300b94c6349330fde249a690ce6a93ba7f566"} Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.650586 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-f76f4954c-5chln" 
event={"ID":"dc8104ce-563e-4e6f-b61d-18e2bdc49879","Type":"ContainerStarted","Data":"dac2a662b7f3e814cf1227d0b8f29877b9eb62a23bd6d684ab37bf0d7211f5f8"} Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.651472 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb8f1731-54b2-4d71-96fb-13fde067045b-cert\") pod \"infra-operator-controller-manager-58944d7758-s79wq\" (UID: \"bb8f1731-54b2-4d71-96fb-13fde067045b\") " pod="openstack-operators/infra-operator-controller-manager-58944d7758-s79wq" Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.651579 4747 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.651638 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb8f1731-54b2-4d71-96fb-13fde067045b-cert podName:bb8f1731-54b2-4d71-96fb-13fde067045b nodeName:}" failed. No retries permitted until 2025-12-15 05:49:21.651619952 +0000 UTC m=+725.348131870 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb8f1731-54b2-4d71-96fb-13fde067045b-cert") pod "infra-operator-controller-manager-58944d7758-s79wq" (UID: "bb8f1731-54b2-4d71-96fb-13fde067045b") : secret "infra-operator-webhook-server-cert" not found Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.654830 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-rkqrw" event={"ID":"a9d4c90d-ecd6-4126-8d91-dfb784a64d54","Type":"ContainerStarted","Data":"83aac9ff84e9316a449ac08ab983ecd8fc4ef4a8e5b7c1c085553c350fe35be8"} Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.655632 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-55f78b7c4c-rgxgj" event={"ID":"2e8d5dd7-baa6-49fb-9f9f-735905ac6e61","Type":"ContainerStarted","Data":"270dbad651753d4e1608fb08601abac6f08a30630e0e2b4b6f71a18f58c5d22b"} Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.660570 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-55f78b7c4c-rgxgj" podUID="2e8d5dd7-baa6-49fb-9f9f-735905ac6e61" Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.661816 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtm79" event={"ID":"3818fc80-b8e4-4dc2-9470-587cf10a2350","Type":"ContainerStarted","Data":"3cfd872791b0502f1ef1be742ce92988ff6630f701f887c4bec32c7da0791711"} Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.663122 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtm79" podUID="3818fc80-b8e4-4dc2-9470-587cf10a2350" Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.663665 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-767f9d7567-hk9c4" event={"ID":"ed7a99f7-83b8-48f4-9cc9-135af2e16529","Type":"ContainerStarted","Data":"d1a15fa2bc8f05bc11c2c0e635a48f948890eed3b40ed7525f7553075a9ba5ed"} Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.664355 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jmxtj" event={"ID":"e1cafba6-81fa-4f70-b79d-4d02cdd194a3","Type":"ContainerStarted","Data":"918a87474669002de44cea513d9d64c7607480b18c6d1dcb4902b1db4d61cb0a"} Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.665345 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jmxtj" podUID="e1cafba6-81fa-4f70-b79d-4d02cdd194a3" Dec 15 05:49:19 crc kubenswrapper[4747]: I1215 05:49:19.955800 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3858e881-df69-47eb-8a78-fa48f7ca7f87-cert\") pod \"openstack-baremetal-operator-controller-manager-689f887b54sfqvx\" (UID: \"3858e881-df69-47eb-8a78-fa48f7ca7f87\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-689f887b54sfqvx" Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.956012 4747 secret.go:188] 
Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 15 05:49:19 crc kubenswrapper[4747]: E1215 05:49:19.956086 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3858e881-df69-47eb-8a78-fa48f7ca7f87-cert podName:3858e881-df69-47eb-8a78-fa48f7ca7f87 nodeName:}" failed. No retries permitted until 2025-12-15 05:49:21.956067662 +0000 UTC m=+725.652579580 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3858e881-df69-47eb-8a78-fa48f7ca7f87-cert") pod "openstack-baremetal-operator-controller-manager-689f887b54sfqvx" (UID: "3858e881-df69-47eb-8a78-fa48f7ca7f87") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 15 05:49:20 crc kubenswrapper[4747]: I1215 05:49:20.363503 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-metrics-certs\") pod \"openstack-operator-controller-manager-56f6fbdf6-ch5s4\" (UID: \"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0\") " pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:20 crc kubenswrapper[4747]: I1215 05:49:20.363867 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-webhook-certs\") pod \"openstack-operator-controller-manager-56f6fbdf6-ch5s4\" (UID: \"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0\") " pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:20 crc kubenswrapper[4747]: E1215 05:49:20.363735 4747 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 15 05:49:20 crc kubenswrapper[4747]: E1215 05:49:20.364220 4747 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-metrics-certs podName:e3f1bf4c-044b-49d5-be51-b853e2f6a7b0 nodeName:}" failed. No retries permitted until 2025-12-15 05:49:22.364202154 +0000 UTC m=+726.060714071 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-metrics-certs") pod "openstack-operator-controller-manager-56f6fbdf6-ch5s4" (UID: "e3f1bf4c-044b-49d5-be51-b853e2f6a7b0") : secret "metrics-server-cert" not found Dec 15 05:49:20 crc kubenswrapper[4747]: E1215 05:49:20.364132 4747 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 15 05:49:20 crc kubenswrapper[4747]: E1215 05:49:20.364700 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-webhook-certs podName:e3f1bf4c-044b-49d5-be51-b853e2f6a7b0 nodeName:}" failed. No retries permitted until 2025-12-15 05:49:22.364690221 +0000 UTC m=+726.061202139 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-webhook-certs") pod "openstack-operator-controller-manager-56f6fbdf6-ch5s4" (UID: "e3f1bf4c-044b-49d5-be51-b853e2f6a7b0") : secret "webhook-server-cert" not found Dec 15 05:49:20 crc kubenswrapper[4747]: E1215 05:49:20.695207 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtm79" podUID="3818fc80-b8e4-4dc2-9470-587cf10a2350" Dec 15 05:49:20 crc kubenswrapper[4747]: E1215 05:49:20.695248 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-5cf45c46bd-ggkl6" podUID="50d161a9-2162-4642-bfd4-74bde1129134" Dec 15 05:49:20 crc kubenswrapper[4747]: E1215 05:49:20.695329 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991\\\"\"" pod="openstack-operators/swift-operator-controller-manager-5c6df8f9-tm9tq" podUID="4e1be8a6-df60-418b-911f-efbf8aa5cf5a" Dec 15 05:49:20 crc kubenswrapper[4747]: E1215 05:49:20.695315 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jmxtj" podUID="e1cafba6-81fa-4f70-b79d-4d02cdd194a3" Dec 15 05:49:20 crc kubenswrapper[4747]: E1215 05:49:20.695375 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-55f78b7c4c-rgxgj" podUID="2e8d5dd7-baa6-49fb-9f9f-735905ac6e61" Dec 15 05:49:20 crc kubenswrapper[4747]: E1215 05:49:20.695425 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-97d456b9-gqlwk" podUID="df77558c-ad92-43a1-9d9a-e3fac782b0e8" Dec 15 05:49:21 crc kubenswrapper[4747]: I1215 05:49:21.705846 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb8f1731-54b2-4d71-96fb-13fde067045b-cert\") pod \"infra-operator-controller-manager-58944d7758-s79wq\" (UID: \"bb8f1731-54b2-4d71-96fb-13fde067045b\") " pod="openstack-operators/infra-operator-controller-manager-58944d7758-s79wq" Dec 15 05:49:21 crc kubenswrapper[4747]: E1215 05:49:21.706206 4747 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 15 05:49:21 crc kubenswrapper[4747]: E1215 05:49:21.706277 4747 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/bb8f1731-54b2-4d71-96fb-13fde067045b-cert podName:bb8f1731-54b2-4d71-96fb-13fde067045b nodeName:}" failed. No retries permitted until 2025-12-15 05:49:25.706257341 +0000 UTC m=+729.402769258 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb8f1731-54b2-4d71-96fb-13fde067045b-cert") pod "infra-operator-controller-manager-58944d7758-s79wq" (UID: "bb8f1731-54b2-4d71-96fb-13fde067045b") : secret "infra-operator-webhook-server-cert" not found Dec 15 05:49:22 crc kubenswrapper[4747]: I1215 05:49:22.010495 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3858e881-df69-47eb-8a78-fa48f7ca7f87-cert\") pod \"openstack-baremetal-operator-controller-manager-689f887b54sfqvx\" (UID: \"3858e881-df69-47eb-8a78-fa48f7ca7f87\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-689f887b54sfqvx" Dec 15 05:49:22 crc kubenswrapper[4747]: E1215 05:49:22.010649 4747 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 15 05:49:22 crc kubenswrapper[4747]: E1215 05:49:22.010945 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3858e881-df69-47eb-8a78-fa48f7ca7f87-cert podName:3858e881-df69-47eb-8a78-fa48f7ca7f87 nodeName:}" failed. No retries permitted until 2025-12-15 05:49:26.010911539 +0000 UTC m=+729.707423476 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3858e881-df69-47eb-8a78-fa48f7ca7f87-cert") pod "openstack-baremetal-operator-controller-manager-689f887b54sfqvx" (UID: "3858e881-df69-47eb-8a78-fa48f7ca7f87") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 15 05:49:22 crc kubenswrapper[4747]: I1215 05:49:22.416511 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-metrics-certs\") pod \"openstack-operator-controller-manager-56f6fbdf6-ch5s4\" (UID: \"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0\") " pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:22 crc kubenswrapper[4747]: I1215 05:49:22.416581 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-webhook-certs\") pod \"openstack-operator-controller-manager-56f6fbdf6-ch5s4\" (UID: \"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0\") " pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:22 crc kubenswrapper[4747]: E1215 05:49:22.416723 4747 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 15 05:49:22 crc kubenswrapper[4747]: E1215 05:49:22.416819 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-metrics-certs podName:e3f1bf4c-044b-49d5-be51-b853e2f6a7b0 nodeName:}" failed. No retries permitted until 2025-12-15 05:49:26.416797341 +0000 UTC m=+730.113309258 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-metrics-certs") pod "openstack-operator-controller-manager-56f6fbdf6-ch5s4" (UID: "e3f1bf4c-044b-49d5-be51-b853e2f6a7b0") : secret "metrics-server-cert" not found Dec 15 05:49:22 crc kubenswrapper[4747]: E1215 05:49:22.416831 4747 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 15 05:49:22 crc kubenswrapper[4747]: E1215 05:49:22.416945 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-webhook-certs podName:e3f1bf4c-044b-49d5-be51-b853e2f6a7b0 nodeName:}" failed. No retries permitted until 2025-12-15 05:49:26.416906707 +0000 UTC m=+730.113418624 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-webhook-certs") pod "openstack-operator-controller-manager-56f6fbdf6-ch5s4" (UID: "e3f1bf4c-044b-49d5-be51-b853e2f6a7b0") : secret "webhook-server-cert" not found Dec 15 05:49:25 crc kubenswrapper[4747]: I1215 05:49:25.767082 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb8f1731-54b2-4d71-96fb-13fde067045b-cert\") pod \"infra-operator-controller-manager-58944d7758-s79wq\" (UID: \"bb8f1731-54b2-4d71-96fb-13fde067045b\") " pod="openstack-operators/infra-operator-controller-manager-58944d7758-s79wq" Dec 15 05:49:25 crc kubenswrapper[4747]: E1215 05:49:25.767722 4747 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 15 05:49:25 crc kubenswrapper[4747]: E1215 05:49:25.767785 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb8f1731-54b2-4d71-96fb-13fde067045b-cert 
podName:bb8f1731-54b2-4d71-96fb-13fde067045b nodeName:}" failed. No retries permitted until 2025-12-15 05:49:33.767764109 +0000 UTC m=+737.464276026 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb8f1731-54b2-4d71-96fb-13fde067045b-cert") pod "infra-operator-controller-manager-58944d7758-s79wq" (UID: "bb8f1731-54b2-4d71-96fb-13fde067045b") : secret "infra-operator-webhook-server-cert" not found Dec 15 05:49:26 crc kubenswrapper[4747]: I1215 05:49:26.073310 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3858e881-df69-47eb-8a78-fa48f7ca7f87-cert\") pod \"openstack-baremetal-operator-controller-manager-689f887b54sfqvx\" (UID: \"3858e881-df69-47eb-8a78-fa48f7ca7f87\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-689f887b54sfqvx" Dec 15 05:49:26 crc kubenswrapper[4747]: E1215 05:49:26.073554 4747 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 15 05:49:26 crc kubenswrapper[4747]: E1215 05:49:26.073658 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3858e881-df69-47eb-8a78-fa48f7ca7f87-cert podName:3858e881-df69-47eb-8a78-fa48f7ca7f87 nodeName:}" failed. No retries permitted until 2025-12-15 05:49:34.073636257 +0000 UTC m=+737.770148175 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3858e881-df69-47eb-8a78-fa48f7ca7f87-cert") pod "openstack-baremetal-operator-controller-manager-689f887b54sfqvx" (UID: "3858e881-df69-47eb-8a78-fa48f7ca7f87") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 15 05:49:26 crc kubenswrapper[4747]: I1215 05:49:26.483493 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-metrics-certs\") pod \"openstack-operator-controller-manager-56f6fbdf6-ch5s4\" (UID: \"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0\") " pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:26 crc kubenswrapper[4747]: I1215 05:49:26.483623 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-webhook-certs\") pod \"openstack-operator-controller-manager-56f6fbdf6-ch5s4\" (UID: \"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0\") " pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:26 crc kubenswrapper[4747]: E1215 05:49:26.483832 4747 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 15 05:49:26 crc kubenswrapper[4747]: E1215 05:49:26.483894 4747 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 15 05:49:26 crc kubenswrapper[4747]: E1215 05:49:26.483968 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-metrics-certs podName:e3f1bf4c-044b-49d5-be51-b853e2f6a7b0 nodeName:}" failed. No retries permitted until 2025-12-15 05:49:34.483919879 +0000 UTC m=+738.180431797 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-metrics-certs") pod "openstack-operator-controller-manager-56f6fbdf6-ch5s4" (UID: "e3f1bf4c-044b-49d5-be51-b853e2f6a7b0") : secret "metrics-server-cert" not found Dec 15 05:49:26 crc kubenswrapper[4747]: E1215 05:49:26.484000 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-webhook-certs podName:e3f1bf4c-044b-49d5-be51-b853e2f6a7b0 nodeName:}" failed. No retries permitted until 2025-12-15 05:49:34.483989721 +0000 UTC m=+738.180501638 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-webhook-certs") pod "openstack-operator-controller-manager-56f6fbdf6-ch5s4" (UID: "e3f1bf4c-044b-49d5-be51-b853e2f6a7b0") : secret "webhook-server-cert" not found Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.788580 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-756ccf86c7-6dlgk" event={"ID":"3f5c0d61-d8f5-4bfb-87c1-4f795057abd2","Type":"ContainerStarted","Data":"3f8d8c03986ce147189e7a4e5ca207adfae4d0177bc452121e89446019d0d47e"} Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.789172 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-756ccf86c7-6dlgk" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.795572 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8665b56d78-c2gjc" event={"ID":"c1d38621-ff5b-4d92-8457-9568c6b67416","Type":"ContainerStarted","Data":"b8c49026f51323206525c1cc266f4ad254e70630392d38e7f7183919de936674"} Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.795703 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/placement-operator-controller-manager-8665b56d78-c2gjc" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.796796 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-767f9d7567-hk9c4" event={"ID":"ed7a99f7-83b8-48f4-9cc9-135af2e16529","Type":"ContainerStarted","Data":"3438d9f5454ebe42527d5b49e7baf14ea382623f72b950fcc02d4e3b87190e7e"} Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.796948 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-767f9d7567-hk9c4" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.797904 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6ccf486b9-cmgcn" event={"ID":"c8a35ff2-385b-46d4-95e6-d7e85a7c8477","Type":"ContainerStarted","Data":"b8dd2fe8225badb56272a6c5ee7927ec1d474dfa34cffc591f1b93a4eb0ceca3"} Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.798379 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6ccf486b9-cmgcn" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.800106 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-snvkz" event={"ID":"60924e24-00f9-4f6a-bf7e-385f8e54a027","Type":"ContainerStarted","Data":"5ed2c042d5b707d1fc3dc7cb3d3eb0c66630b7b421d9aabcac3406c261925a7b"} Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.800451 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-snvkz" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.801898 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f458558d7-vf58x" 
event={"ID":"b93e01ce-98e3-4941-8721-d9ce67414730","Type":"ContainerStarted","Data":"477b2235e92eac5c8b174d2303a38fa77d58e4ad4df09666219fac3888b42de7"} Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.802242 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-f458558d7-vf58x" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.807215 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-f76f4954c-5chln" event={"ID":"dc8104ce-563e-4e6f-b61d-18e2bdc49879","Type":"ContainerStarted","Data":"7e54b8a91e123f98afc467a9351042edd8abff386ae21a567e58198b2b303574"} Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.807553 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-f76f4954c-5chln" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.809739 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-756ccf86c7-6dlgk" podStartSLOduration=2.4661314 podStartE2EDuration="14.809729679s" podCreationTimestamp="2025-12-15 05:49:18 +0000 UTC" firstStartedPulling="2025-12-15 05:49:19.488723458 +0000 UTC m=+723.185235375" lastFinishedPulling="2025-12-15 05:49:31.832321747 +0000 UTC m=+735.528833654" observedRunningTime="2025-12-15 05:49:32.806024281 +0000 UTC m=+736.502536197" watchObservedRunningTime="2025-12-15 05:49:32.809729679 +0000 UTC m=+736.506241596" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.820671 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5c7cbf548f-v7cjm" event={"ID":"fdda9bcd-0316-4549-af8b-ae0e151e59d7","Type":"ContainerStarted","Data":"5fb4ddda922d5b8bb3ff03d87c7f756e450f191158026ce50a561304a778ac6a"} Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.820771 4747 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-5c7cbf548f-v7cjm" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.822853 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5fdd9786f7-dg8cj" event={"ID":"e6558c12-d59f-4593-9605-a7dc6c19e766","Type":"ContainerStarted","Data":"3abc29c3f726cecec1de479eeac605c56df2991154c12696d6ad9119b7b76cb4"} Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.822969 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5fdd9786f7-dg8cj" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.825768 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-59b8dcb766-tcs4c" event={"ID":"07926291-631c-415d-8aaa-c425852decd9","Type":"ContainerStarted","Data":"b700e82412f43fc26504f35fd74f2bd32e940542d899116bed67fb574357844b"} Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.826363 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-59b8dcb766-tcs4c" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.830781 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-95949466-pzsnr" event={"ID":"966a3797-97c2-4e8d-8799-6b8a287efd78","Type":"ContainerStarted","Data":"d774f79320a54289beb9cb0ff320292fec4e4ddac4444407c5a9bcb0ff52e151"} Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.831194 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-95949466-pzsnr" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.831786 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-snvkz" podStartSLOduration=3.265520415 podStartE2EDuration="15.831774474s" podCreationTimestamp="2025-12-15 05:49:17 +0000 UTC" firstStartedPulling="2025-12-15 05:49:19.33154186 +0000 UTC m=+723.028053777" lastFinishedPulling="2025-12-15 05:49:31.897795919 +0000 UTC m=+735.594307836" observedRunningTime="2025-12-15 05:49:32.828827532 +0000 UTC m=+736.525339449" watchObservedRunningTime="2025-12-15 05:49:32.831774474 +0000 UTC m=+736.528286391" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.832439 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-qw6tr" event={"ID":"5f14ea23-34de-4d4b-971d-dc90d34c44a9","Type":"ContainerStarted","Data":"ed68b2af5d395b6aef240c4dbc5e1265008fd3d064fc451631f08790bd08a626"} Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.832806 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-qw6tr" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.834668 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-sffcl" event={"ID":"5a07861b-82a4-47c3-8255-3b76b44da9d6","Type":"ContainerStarted","Data":"659d4443b4809cb29c99fb69f3fd2be5fb9edaeb5515166bc01fae152c80dfb2"} Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.834808 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-sffcl" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.835839 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-rkqrw" event={"ID":"a9d4c90d-ecd6-4126-8d91-dfb784a64d54","Type":"ContainerStarted","Data":"4784a1d3b2d13dcf7ae7ada85225c6c11dc727bd9627e9a8e372d4523921a3f4"} Dec 15 
05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.836240 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-rkqrw" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.848902 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8665b56d78-c2gjc" podStartSLOduration=2.332195035 podStartE2EDuration="14.848892924s" podCreationTimestamp="2025-12-15 05:49:18 +0000 UTC" firstStartedPulling="2025-12-15 05:49:19.321963709 +0000 UTC m=+723.018475626" lastFinishedPulling="2025-12-15 05:49:31.838661608 +0000 UTC m=+735.535173515" observedRunningTime="2025-12-15 05:49:32.845466051 +0000 UTC m=+736.541977967" watchObservedRunningTime="2025-12-15 05:49:32.848892924 +0000 UTC m=+736.545404842" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.864992 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-f458558d7-vf58x" podStartSLOduration=3.663172704 podStartE2EDuration="15.864982922s" podCreationTimestamp="2025-12-15 05:49:17 +0000 UTC" firstStartedPulling="2025-12-15 05:49:19.196719939 +0000 UTC m=+722.893231857" lastFinishedPulling="2025-12-15 05:49:31.398530157 +0000 UTC m=+735.095042075" observedRunningTime="2025-12-15 05:49:32.861360169 +0000 UTC m=+736.557872086" watchObservedRunningTime="2025-12-15 05:49:32.864982922 +0000 UTC m=+736.561494839" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.929305 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-767f9d7567-hk9c4" podStartSLOduration=3.20629937 podStartE2EDuration="15.929282286s" podCreationTimestamp="2025-12-15 05:49:17 +0000 UTC" firstStartedPulling="2025-12-15 05:49:19.10830711 +0000 UTC m=+722.804819027" lastFinishedPulling="2025-12-15 05:49:31.831290037 +0000 UTC 
m=+735.527801943" observedRunningTime="2025-12-15 05:49:32.880863585 +0000 UTC m=+736.577375502" watchObservedRunningTime="2025-12-15 05:49:32.929282286 +0000 UTC m=+736.625794203" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.941544 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6ccf486b9-cmgcn" podStartSLOduration=3.005050525 podStartE2EDuration="15.941500603s" podCreationTimestamp="2025-12-15 05:49:17 +0000 UTC" firstStartedPulling="2025-12-15 05:49:18.909910929 +0000 UTC m=+722.606422846" lastFinishedPulling="2025-12-15 05:49:31.846361008 +0000 UTC m=+735.542872924" observedRunningTime="2025-12-15 05:49:32.905128575 +0000 UTC m=+736.601640492" watchObservedRunningTime="2025-12-15 05:49:32.941500603 +0000 UTC m=+736.638012519" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.948032 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-qw6tr" podStartSLOduration=3.202234886 podStartE2EDuration="15.948017677s" podCreationTimestamp="2025-12-15 05:49:17 +0000 UTC" firstStartedPulling="2025-12-15 05:49:19.104309092 +0000 UTC m=+722.800821009" lastFinishedPulling="2025-12-15 05:49:31.850091892 +0000 UTC m=+735.546603800" observedRunningTime="2025-12-15 05:49:32.929402171 +0000 UTC m=+736.625914088" watchObservedRunningTime="2025-12-15 05:49:32.948017677 +0000 UTC m=+736.644529594" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.951752 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5fdd9786f7-dg8cj" podStartSLOduration=3.332433122 podStartE2EDuration="15.951740528s" podCreationTimestamp="2025-12-15 05:49:17 +0000 UTC" firstStartedPulling="2025-12-15 05:49:19.222151994 +0000 UTC m=+722.918663911" lastFinishedPulling="2025-12-15 05:49:31.841459409 +0000 UTC m=+735.537971317" 
observedRunningTime="2025-12-15 05:49:32.945220838 +0000 UTC m=+736.641732755" watchObservedRunningTime="2025-12-15 05:49:32.951740528 +0000 UTC m=+736.648252445" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.982781 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-f76f4954c-5chln" podStartSLOduration=3.401338047 podStartE2EDuration="15.982762494s" podCreationTimestamp="2025-12-15 05:49:17 +0000 UTC" firstStartedPulling="2025-12-15 05:49:19.223263104 +0000 UTC m=+722.919775021" lastFinishedPulling="2025-12-15 05:49:31.80468755 +0000 UTC m=+735.501199468" observedRunningTime="2025-12-15 05:49:32.969577411 +0000 UTC m=+736.666089328" watchObservedRunningTime="2025-12-15 05:49:32.982762494 +0000 UTC m=+736.679274411" Dec 15 05:49:32 crc kubenswrapper[4747]: I1215 05:49:32.998778 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-5c7cbf548f-v7cjm" podStartSLOduration=3.387223444 podStartE2EDuration="15.998724761s" podCreationTimestamp="2025-12-15 05:49:17 +0000 UTC" firstStartedPulling="2025-12-15 05:49:19.220754506 +0000 UTC m=+722.917266423" lastFinishedPulling="2025-12-15 05:49:31.832255823 +0000 UTC m=+735.528767740" observedRunningTime="2025-12-15 05:49:32.994203029 +0000 UTC m=+736.690714946" watchObservedRunningTime="2025-12-15 05:49:32.998724761 +0000 UTC m=+736.695236678" Dec 15 05:49:33 crc kubenswrapper[4747]: I1215 05:49:33.033509 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-sffcl" podStartSLOduration=3.411421178 podStartE2EDuration="16.033490317s" podCreationTimestamp="2025-12-15 05:49:17 +0000 UTC" firstStartedPulling="2025-12-15 05:49:19.214277486 +0000 UTC m=+722.910789404" lastFinishedPulling="2025-12-15 05:49:31.836346625 +0000 UTC m=+735.532858543" 
observedRunningTime="2025-12-15 05:49:33.02840752 +0000 UTC m=+736.724919437" watchObservedRunningTime="2025-12-15 05:49:33.033490317 +0000 UTC m=+736.730002235" Dec 15 05:49:33 crc kubenswrapper[4747]: I1215 05:49:33.048452 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-95949466-pzsnr" podStartSLOduration=2.870783437 podStartE2EDuration="16.048436834s" podCreationTimestamp="2025-12-15 05:49:17 +0000 UTC" firstStartedPulling="2025-12-15 05:49:18.62616531 +0000 UTC m=+722.322677227" lastFinishedPulling="2025-12-15 05:49:31.803818707 +0000 UTC m=+735.500330624" observedRunningTime="2025-12-15 05:49:33.042983269 +0000 UTC m=+736.739495186" watchObservedRunningTime="2025-12-15 05:49:33.048436834 +0000 UTC m=+736.744948742" Dec 15 05:49:33 crc kubenswrapper[4747]: I1215 05:49:33.070984 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-rkqrw" podStartSLOduration=2.944776058 podStartE2EDuration="16.07097141s" podCreationTimestamp="2025-12-15 05:49:17 +0000 UTC" firstStartedPulling="2025-12-15 05:49:18.705879561 +0000 UTC m=+722.402391478" lastFinishedPulling="2025-12-15 05:49:31.832074913 +0000 UTC m=+735.528586830" observedRunningTime="2025-12-15 05:49:33.065486065 +0000 UTC m=+736.761997972" watchObservedRunningTime="2025-12-15 05:49:33.07097141 +0000 UTC m=+736.767483317" Dec 15 05:49:33 crc kubenswrapper[4747]: I1215 05:49:33.094568 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-59b8dcb766-tcs4c" podStartSLOduration=3.120089413 podStartE2EDuration="16.09455535s" podCreationTimestamp="2025-12-15 05:49:17 +0000 UTC" firstStartedPulling="2025-12-15 05:49:18.858438416 +0000 UTC m=+722.554950333" lastFinishedPulling="2025-12-15 05:49:31.832904352 +0000 UTC m=+735.529416270" observedRunningTime="2025-12-15 
05:49:33.090775852 +0000 UTC m=+736.787287769" watchObservedRunningTime="2025-12-15 05:49:33.09455535 +0000 UTC m=+736.791067267" Dec 15 05:49:33 crc kubenswrapper[4747]: I1215 05:49:33.819112 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb8f1731-54b2-4d71-96fb-13fde067045b-cert\") pod \"infra-operator-controller-manager-58944d7758-s79wq\" (UID: \"bb8f1731-54b2-4d71-96fb-13fde067045b\") " pod="openstack-operators/infra-operator-controller-manager-58944d7758-s79wq" Dec 15 05:49:33 crc kubenswrapper[4747]: I1215 05:49:33.831494 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb8f1731-54b2-4d71-96fb-13fde067045b-cert\") pod \"infra-operator-controller-manager-58944d7758-s79wq\" (UID: \"bb8f1731-54b2-4d71-96fb-13fde067045b\") " pod="openstack-operators/infra-operator-controller-manager-58944d7758-s79wq" Dec 15 05:49:34 crc kubenswrapper[4747]: I1215 05:49:34.122681 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3858e881-df69-47eb-8a78-fa48f7ca7f87-cert\") pod \"openstack-baremetal-operator-controller-manager-689f887b54sfqvx\" (UID: \"3858e881-df69-47eb-8a78-fa48f7ca7f87\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-689f887b54sfqvx" Dec 15 05:49:34 crc kubenswrapper[4747]: I1215 05:49:34.128388 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58944d7758-s79wq" Dec 15 05:49:34 crc kubenswrapper[4747]: I1215 05:49:34.129338 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3858e881-df69-47eb-8a78-fa48f7ca7f87-cert\") pod \"openstack-baremetal-operator-controller-manager-689f887b54sfqvx\" (UID: \"3858e881-df69-47eb-8a78-fa48f7ca7f87\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-689f887b54sfqvx" Dec 15 05:49:34 crc kubenswrapper[4747]: I1215 05:49:34.386120 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-689f887b54sfqvx" Dec 15 05:49:34 crc kubenswrapper[4747]: I1215 05:49:34.534905 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-metrics-certs\") pod \"openstack-operator-controller-manager-56f6fbdf6-ch5s4\" (UID: \"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0\") " pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:34 crc kubenswrapper[4747]: I1215 05:49:34.535306 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-webhook-certs\") pod \"openstack-operator-controller-manager-56f6fbdf6-ch5s4\" (UID: \"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0\") " pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:34 crc kubenswrapper[4747]: E1215 05:49:34.535502 4747 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 15 05:49:34 crc kubenswrapper[4747]: E1215 05:49:34.535568 4747 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-webhook-certs podName:e3f1bf4c-044b-49d5-be51-b853e2f6a7b0 nodeName:}" failed. No retries permitted until 2025-12-15 05:49:50.53554952 +0000 UTC m=+754.232061437 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-webhook-certs") pod "openstack-operator-controller-manager-56f6fbdf6-ch5s4" (UID: "e3f1bf4c-044b-49d5-be51-b853e2f6a7b0") : secret "webhook-server-cert" not found Dec 15 05:49:34 crc kubenswrapper[4747]: I1215 05:49:34.541248 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-58944d7758-s79wq"] Dec 15 05:49:34 crc kubenswrapper[4747]: W1215 05:49:34.541620 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb8f1731_54b2_4d71_96fb_13fde067045b.slice/crio-69211149fd0ae8adb5b29489f6345cb2018cb603477917ddf953e50a39ec8bf5 WatchSource:0}: Error finding container 69211149fd0ae8adb5b29489f6345cb2018cb603477917ddf953e50a39ec8bf5: Status 404 returned error can't find the container with id 69211149fd0ae8adb5b29489f6345cb2018cb603477917ddf953e50a39ec8bf5 Dec 15 05:49:34 crc kubenswrapper[4747]: I1215 05:49:34.541798 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-metrics-certs\") pod \"openstack-operator-controller-manager-56f6fbdf6-ch5s4\" (UID: \"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0\") " pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:34 crc kubenswrapper[4747]: I1215 05:49:34.794449 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-689f887b54sfqvx"] Dec 15 05:49:34 crc kubenswrapper[4747]: W1215 05:49:34.799638 
4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3858e881_df69_47eb_8a78_fa48f7ca7f87.slice/crio-f3363791af560ab116381fcf6ce4fab2cca88dbaf44472858a191f1b27466ea7 WatchSource:0}: Error finding container f3363791af560ab116381fcf6ce4fab2cca88dbaf44472858a191f1b27466ea7: Status 404 returned error can't find the container with id f3363791af560ab116381fcf6ce4fab2cca88dbaf44472858a191f1b27466ea7 Dec 15 05:49:34 crc kubenswrapper[4747]: I1215 05:49:34.854614 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58944d7758-s79wq" event={"ID":"bb8f1731-54b2-4d71-96fb-13fde067045b","Type":"ContainerStarted","Data":"69211149fd0ae8adb5b29489f6345cb2018cb603477917ddf953e50a39ec8bf5"} Dec 15 05:49:34 crc kubenswrapper[4747]: I1215 05:49:34.855889 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-689f887b54sfqvx" event={"ID":"3858e881-df69-47eb-8a78-fa48f7ca7f87","Type":"ContainerStarted","Data":"f3363791af560ab116381fcf6ce4fab2cca88dbaf44472858a191f1b27466ea7"} Dec 15 05:49:38 crc kubenswrapper[4747]: I1215 05:49:38.124684 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-95949466-pzsnr" Dec 15 05:49:38 crc kubenswrapper[4747]: I1215 05:49:38.151624 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-rkqrw" Dec 15 05:49:38 crc kubenswrapper[4747]: I1215 05:49:38.179562 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-767f9d7567-hk9c4" Dec 15 05:49:38 crc kubenswrapper[4747]: I1215 05:49:38.204724 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/heat-operator-controller-manager-59b8dcb766-tcs4c" Dec 15 05:49:38 crc kubenswrapper[4747]: I1215 05:49:38.211883 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6ccf486b9-cmgcn" Dec 15 05:49:38 crc kubenswrapper[4747]: I1215 05:49:38.275618 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-f458558d7-vf58x" Dec 15 05:49:38 crc kubenswrapper[4747]: I1215 05:49:38.286101 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-5c7cbf548f-v7cjm" Dec 15 05:49:38 crc kubenswrapper[4747]: I1215 05:49:38.313095 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5fdd9786f7-dg8cj" Dec 15 05:49:38 crc kubenswrapper[4747]: I1215 05:49:38.341751 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-qw6tr" Dec 15 05:49:38 crc kubenswrapper[4747]: I1215 05:49:38.348245 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-f76f4954c-5chln" Dec 15 05:49:38 crc kubenswrapper[4747]: I1215 05:49:38.417317 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-snvkz" Dec 15 05:49:38 crc kubenswrapper[4747]: I1215 05:49:38.426568 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-sffcl" Dec 15 05:49:38 crc kubenswrapper[4747]: I1215 05:49:38.544368 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/placement-operator-controller-manager-8665b56d78-c2gjc" Dec 15 05:49:38 crc kubenswrapper[4747]: I1215 05:49:38.655614 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-756ccf86c7-6dlgk" Dec 15 05:49:38 crc kubenswrapper[4747]: I1215 05:49:38.884635 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5cf45c46bd-ggkl6" event={"ID":"50d161a9-2162-4642-bfd4-74bde1129134","Type":"ContainerStarted","Data":"109823041fbcb0f338fa98e5ed7318e6c3f30f05fa398374822a898c33b01853"} Dec 15 05:49:38 crc kubenswrapper[4747]: I1215 05:49:38.884854 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5cf45c46bd-ggkl6" Dec 15 05:49:38 crc kubenswrapper[4747]: I1215 05:49:38.886240 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-97d456b9-gqlwk" event={"ID":"df77558c-ad92-43a1-9d9a-e3fac782b0e8","Type":"ContainerStarted","Data":"2cecdd05c98f53673e0257692fcdcb8cb09cd15c2c0c8379045acba005c92291"} Dec 15 05:49:38 crc kubenswrapper[4747]: I1215 05:49:38.886562 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-97d456b9-gqlwk" Dec 15 05:49:38 crc kubenswrapper[4747]: I1215 05:49:38.900000 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5cf45c46bd-ggkl6" podStartSLOduration=3.3187423369999998 podStartE2EDuration="21.899983357s" podCreationTimestamp="2025-12-15 05:49:17 +0000 UTC" firstStartedPulling="2025-12-15 05:49:19.35151526 +0000 UTC m=+723.048027167" lastFinishedPulling="2025-12-15 05:49:37.932756271 +0000 UTC m=+741.629268187" observedRunningTime="2025-12-15 05:49:38.895898456 +0000 UTC m=+742.592410374" 
watchObservedRunningTime="2025-12-15 05:49:38.899983357 +0000 UTC m=+742.596495275" Dec 15 05:49:38 crc kubenswrapper[4747]: I1215 05:49:38.912266 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-97d456b9-gqlwk" podStartSLOduration=2.562420491 podStartE2EDuration="20.912250196s" podCreationTimestamp="2025-12-15 05:49:18 +0000 UTC" firstStartedPulling="2025-12-15 05:49:19.588152574 +0000 UTC m=+723.284664490" lastFinishedPulling="2025-12-15 05:49:37.937982278 +0000 UTC m=+741.634494195" observedRunningTime="2025-12-15 05:49:38.906217631 +0000 UTC m=+742.602729548" watchObservedRunningTime="2025-12-15 05:49:38.912250196 +0000 UTC m=+742.608762113" Dec 15 05:49:42 crc kubenswrapper[4747]: I1215 05:49:42.917578 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5c6df8f9-tm9tq" event={"ID":"4e1be8a6-df60-418b-911f-efbf8aa5cf5a","Type":"ContainerStarted","Data":"e944d9137ef5c3aca3342453d4853e0e4f6a9651bfbdf96fcdba790fdfd55f3d"} Dec 15 05:49:42 crc kubenswrapper[4747]: I1215 05:49:42.918315 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5c6df8f9-tm9tq" Dec 15 05:49:42 crc kubenswrapper[4747]: I1215 05:49:42.919560 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-689f887b54sfqvx" event={"ID":"3858e881-df69-47eb-8a78-fa48f7ca7f87","Type":"ContainerStarted","Data":"611462f9d2f7d74769a5f36c7f81e88ddeb5ed7e2186bde5732e13dac818e4b8"} Dec 15 05:49:42 crc kubenswrapper[4747]: I1215 05:49:42.919707 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-689f887b54sfqvx" Dec 15 05:49:42 crc kubenswrapper[4747]: I1215 05:49:42.921369 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-58944d7758-s79wq" event={"ID":"bb8f1731-54b2-4d71-96fb-13fde067045b","Type":"ContainerStarted","Data":"0f48ec1befbf7eefe1f3beae888cb37c517af20efd1935c105d210ba19d090db"} Dec 15 05:49:42 crc kubenswrapper[4747]: I1215 05:49:42.921753 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-58944d7758-s79wq" Dec 15 05:49:42 crc kubenswrapper[4747]: I1215 05:49:42.923336 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-55f78b7c4c-rgxgj" event={"ID":"2e8d5dd7-baa6-49fb-9f9f-735905ac6e61","Type":"ContainerStarted","Data":"2ce62e1f3ed86bae42bdc488c594d4c844dd327f631ccaa090402758c2f4ab4b"} Dec 15 05:49:42 crc kubenswrapper[4747]: I1215 05:49:42.923531 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-55f78b7c4c-rgxgj" Dec 15 05:49:42 crc kubenswrapper[4747]: I1215 05:49:42.925442 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtm79" event={"ID":"3818fc80-b8e4-4dc2-9470-587cf10a2350","Type":"ContainerStarted","Data":"1c7c3bb60418644de9df8ffe5ee0a9a206dbca78ddeec21b62148daa0588b874"} Dec 15 05:49:42 crc kubenswrapper[4747]: I1215 05:49:42.927137 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jmxtj" event={"ID":"e1cafba6-81fa-4f70-b79d-4d02cdd194a3","Type":"ContainerStarted","Data":"86cc3545e34c1227e86c6d9cc58c688acf3f8145f0da4c3c43223517efebac93"} Dec 15 05:49:42 crc kubenswrapper[4747]: I1215 05:49:42.927319 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jmxtj" Dec 15 05:49:42 crc kubenswrapper[4747]: I1215 05:49:42.937083 4747 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5c6df8f9-tm9tq" podStartSLOduration=1.855014525 podStartE2EDuration="24.937068212s" podCreationTimestamp="2025-12-15 05:49:18 +0000 UTC" firstStartedPulling="2025-12-15 05:49:19.497097997 +0000 UTC m=+723.193609914" lastFinishedPulling="2025-12-15 05:49:42.579151684 +0000 UTC m=+746.275663601" observedRunningTime="2025-12-15 05:49:42.934772403 +0000 UTC m=+746.631284321" watchObservedRunningTime="2025-12-15 05:49:42.937068212 +0000 UTC m=+746.633580128" Dec 15 05:49:42 crc kubenswrapper[4747]: I1215 05:49:42.950382 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-55f78b7c4c-rgxgj" podStartSLOduration=1.874623057 podStartE2EDuration="24.950365356s" podCreationTimestamp="2025-12-15 05:49:18 +0000 UTC" firstStartedPulling="2025-12-15 05:49:19.488920669 +0000 UTC m=+723.185432586" lastFinishedPulling="2025-12-15 05:49:42.564662968 +0000 UTC m=+746.261174885" observedRunningTime="2025-12-15 05:49:42.947415218 +0000 UTC m=+746.643927145" watchObservedRunningTime="2025-12-15 05:49:42.950365356 +0000 UTC m=+746.646877273" Dec 15 05:49:42 crc kubenswrapper[4747]: I1215 05:49:42.978646 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-689f887b54sfqvx" podStartSLOduration=17.203965886 podStartE2EDuration="24.978601053s" podCreationTimestamp="2025-12-15 05:49:18 +0000 UTC" firstStartedPulling="2025-12-15 05:49:34.801797645 +0000 UTC m=+738.498309562" lastFinishedPulling="2025-12-15 05:49:42.576432812 +0000 UTC m=+746.272944729" observedRunningTime="2025-12-15 05:49:42.970636686 +0000 UTC m=+746.667148603" watchObservedRunningTime="2025-12-15 05:49:42.978601053 +0000 UTC m=+746.675112971" Dec 15 05:49:42 crc kubenswrapper[4747]: I1215 05:49:42.992962 4747 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vtm79" podStartSLOduration=1.733240549 podStartE2EDuration="24.992920531s" podCreationTimestamp="2025-12-15 05:49:18 +0000 UTC" firstStartedPulling="2025-12-15 05:49:19.34979793 +0000 UTC m=+723.046309847" lastFinishedPulling="2025-12-15 05:49:42.609477912 +0000 UTC m=+746.305989829" observedRunningTime="2025-12-15 05:49:42.987171921 +0000 UTC m=+746.683683838" watchObservedRunningTime="2025-12-15 05:49:42.992920531 +0000 UTC m=+746.689432448" Dec 15 05:49:43 crc kubenswrapper[4747]: I1215 05:49:43.005771 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-58944d7758-s79wq" podStartSLOduration=17.974682854 podStartE2EDuration="26.005761448s" podCreationTimestamp="2025-12-15 05:49:17 +0000 UTC" firstStartedPulling="2025-12-15 05:49:34.543972649 +0000 UTC m=+738.240484566" lastFinishedPulling="2025-12-15 05:49:42.575051244 +0000 UTC m=+746.271563160" observedRunningTime="2025-12-15 05:49:43.003298466 +0000 UTC m=+746.699810383" watchObservedRunningTime="2025-12-15 05:49:43.005761448 +0000 UTC m=+746.702273365" Dec 15 05:49:43 crc kubenswrapper[4747]: I1215 05:49:43.023548 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jmxtj" podStartSLOduration=1.756885004 podStartE2EDuration="25.023532817s" podCreationTimestamp="2025-12-15 05:49:18 +0000 UTC" firstStartedPulling="2025-12-15 05:49:19.339213236 +0000 UTC m=+723.035725154" lastFinishedPulling="2025-12-15 05:49:42.60586105 +0000 UTC m=+746.302372967" observedRunningTime="2025-12-15 05:49:43.018607755 +0000 UTC m=+746.715119672" watchObservedRunningTime="2025-12-15 05:49:43.023532817 +0000 UTC m=+746.720044734" Dec 15 05:49:43 crc kubenswrapper[4747]: I1215 05:49:43.523944 4747 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 15 05:49:48 crc kubenswrapper[4747]: I1215 05:49:48.428866 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5cf45c46bd-ggkl6" Dec 15 05:49:48 crc kubenswrapper[4747]: I1215 05:49:48.524030 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-jmxtj" Dec 15 05:49:48 crc kubenswrapper[4747]: I1215 05:49:48.575076 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5c6df8f9-tm9tq" Dec 15 05:49:48 crc kubenswrapper[4747]: I1215 05:49:48.748347 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-55f78b7c4c-rgxgj" Dec 15 05:49:48 crc kubenswrapper[4747]: I1215 05:49:48.886537 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-97d456b9-gqlwk" Dec 15 05:49:50 crc kubenswrapper[4747]: I1215 05:49:50.567024 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-webhook-certs\") pod \"openstack-operator-controller-manager-56f6fbdf6-ch5s4\" (UID: \"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0\") " pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:50 crc kubenswrapper[4747]: I1215 05:49:50.573556 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e3f1bf4c-044b-49d5-be51-b853e2f6a7b0-webhook-certs\") pod \"openstack-operator-controller-manager-56f6fbdf6-ch5s4\" (UID: \"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0\") " 
pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:50 crc kubenswrapper[4747]: I1215 05:49:50.752532 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:51 crc kubenswrapper[4747]: I1215 05:49:51.132772 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4"] Dec 15 05:49:51 crc kubenswrapper[4747]: W1215 05:49:51.137878 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3f1bf4c_044b_49d5_be51_b853e2f6a7b0.slice/crio-44cb68b512d99e4e18b579a93ec564ee21b1072585e1395f11bdb12d588413ab WatchSource:0}: Error finding container 44cb68b512d99e4e18b579a93ec564ee21b1072585e1395f11bdb12d588413ab: Status 404 returned error can't find the container with id 44cb68b512d99e4e18b579a93ec564ee21b1072585e1395f11bdb12d588413ab Dec 15 05:49:51 crc kubenswrapper[4747]: I1215 05:49:51.992399 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" event={"ID":"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0","Type":"ContainerStarted","Data":"cb3b6a8cd3d809624d0a70e957c3732b4788f5e70209d530e6e43ae10b18c5ff"} Dec 15 05:49:51 crc kubenswrapper[4747]: I1215 05:49:51.992850 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" event={"ID":"e3f1bf4c-044b-49d5-be51-b853e2f6a7b0","Type":"ContainerStarted","Data":"44cb68b512d99e4e18b579a93ec564ee21b1072585e1395f11bdb12d588413ab"} Dec 15 05:49:51 crc kubenswrapper[4747]: I1215 05:49:51.992879 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:49:52 crc kubenswrapper[4747]: I1215 
05:49:52.027211 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" podStartSLOduration=34.027190302 podStartE2EDuration="34.027190302s" podCreationTimestamp="2025-12-15 05:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:49:52.017847292 +0000 UTC m=+755.714359219" watchObservedRunningTime="2025-12-15 05:49:52.027190302 +0000 UTC m=+755.723702218" Dec 15 05:49:54 crc kubenswrapper[4747]: I1215 05:49:54.135389 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-58944d7758-s79wq" Dec 15 05:49:54 crc kubenswrapper[4747]: I1215 05:49:54.392448 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-689f887b54sfqvx" Dec 15 05:50:00 crc kubenswrapper[4747]: I1215 05:50:00.759740 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-56f6fbdf6-ch5s4" Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.505042 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-94b4f9f45-ng7wf"] Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.506683 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-94b4f9f45-ng7wf" Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.511426 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.511479 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-bk7dn" Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.511651 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.511813 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.518478 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-94b4f9f45-ng7wf"] Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.560988 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6947456757-77mcq"] Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.564159 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6947456757-77mcq" Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.568286 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.577478 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6947456757-77mcq"] Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.610081 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jw5j\" (UniqueName: \"kubernetes.io/projected/c5217f8b-c2c7-4600-8137-b3367ec052ad-kube-api-access-9jw5j\") pod \"dnsmasq-dns-94b4f9f45-ng7wf\" (UID: \"c5217f8b-c2c7-4600-8137-b3367ec052ad\") " pod="openstack/dnsmasq-dns-94b4f9f45-ng7wf" Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.610130 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5217f8b-c2c7-4600-8137-b3367ec052ad-config\") pod \"dnsmasq-dns-94b4f9f45-ng7wf\" (UID: \"c5217f8b-c2c7-4600-8137-b3367ec052ad\") " pod="openstack/dnsmasq-dns-94b4f9f45-ng7wf" Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.610217 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba5bb5e-fcb8-46c5-861c-b0e72dee5308-config\") pod \"dnsmasq-dns-6947456757-77mcq\" (UID: \"2ba5bb5e-fcb8-46c5-861c-b0e72dee5308\") " pod="openstack/dnsmasq-dns-6947456757-77mcq" Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.610360 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpsrz\" (UniqueName: \"kubernetes.io/projected/2ba5bb5e-fcb8-46c5-861c-b0e72dee5308-kube-api-access-cpsrz\") pod \"dnsmasq-dns-6947456757-77mcq\" (UID: \"2ba5bb5e-fcb8-46c5-861c-b0e72dee5308\") " 
pod="openstack/dnsmasq-dns-6947456757-77mcq" Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.610443 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ba5bb5e-fcb8-46c5-861c-b0e72dee5308-dns-svc\") pod \"dnsmasq-dns-6947456757-77mcq\" (UID: \"2ba5bb5e-fcb8-46c5-861c-b0e72dee5308\") " pod="openstack/dnsmasq-dns-6947456757-77mcq" Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.712363 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jw5j\" (UniqueName: \"kubernetes.io/projected/c5217f8b-c2c7-4600-8137-b3367ec052ad-kube-api-access-9jw5j\") pod \"dnsmasq-dns-94b4f9f45-ng7wf\" (UID: \"c5217f8b-c2c7-4600-8137-b3367ec052ad\") " pod="openstack/dnsmasq-dns-94b4f9f45-ng7wf" Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.712661 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5217f8b-c2c7-4600-8137-b3367ec052ad-config\") pod \"dnsmasq-dns-94b4f9f45-ng7wf\" (UID: \"c5217f8b-c2c7-4600-8137-b3367ec052ad\") " pod="openstack/dnsmasq-dns-94b4f9f45-ng7wf" Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.712829 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba5bb5e-fcb8-46c5-861c-b0e72dee5308-config\") pod \"dnsmasq-dns-6947456757-77mcq\" (UID: \"2ba5bb5e-fcb8-46c5-861c-b0e72dee5308\") " pod="openstack/dnsmasq-dns-6947456757-77mcq" Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.712989 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpsrz\" (UniqueName: \"kubernetes.io/projected/2ba5bb5e-fcb8-46c5-861c-b0e72dee5308-kube-api-access-cpsrz\") pod \"dnsmasq-dns-6947456757-77mcq\" (UID: \"2ba5bb5e-fcb8-46c5-861c-b0e72dee5308\") " pod="openstack/dnsmasq-dns-6947456757-77mcq" Dec 15 
05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.713718 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ba5bb5e-fcb8-46c5-861c-b0e72dee5308-dns-svc\") pod \"dnsmasq-dns-6947456757-77mcq\" (UID: \"2ba5bb5e-fcb8-46c5-861c-b0e72dee5308\") " pod="openstack/dnsmasq-dns-6947456757-77mcq"
Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.716222 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.716264 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.724168 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5217f8b-c2c7-4600-8137-b3367ec052ad-config\") pod \"dnsmasq-dns-94b4f9f45-ng7wf\" (UID: \"c5217f8b-c2c7-4600-8137-b3367ec052ad\") " pod="openstack/dnsmasq-dns-94b4f9f45-ng7wf"
Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.724434 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba5bb5e-fcb8-46c5-861c-b0e72dee5308-config\") pod \"dnsmasq-dns-6947456757-77mcq\" (UID: \"2ba5bb5e-fcb8-46c5-861c-b0e72dee5308\") " pod="openstack/dnsmasq-dns-6947456757-77mcq"
Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.724696 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ba5bb5e-fcb8-46c5-861c-b0e72dee5308-dns-svc\") pod \"dnsmasq-dns-6947456757-77mcq\" (UID: \"2ba5bb5e-fcb8-46c5-861c-b0e72dee5308\") " pod="openstack/dnsmasq-dns-6947456757-77mcq"
Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.727563 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.737614 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.752678 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpsrz\" (UniqueName: \"kubernetes.io/projected/2ba5bb5e-fcb8-46c5-861c-b0e72dee5308-kube-api-access-cpsrz\") pod \"dnsmasq-dns-6947456757-77mcq\" (UID: \"2ba5bb5e-fcb8-46c5-861c-b0e72dee5308\") " pod="openstack/dnsmasq-dns-6947456757-77mcq"
Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.753141 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jw5j\" (UniqueName: \"kubernetes.io/projected/c5217f8b-c2c7-4600-8137-b3367ec052ad-kube-api-access-9jw5j\") pod \"dnsmasq-dns-94b4f9f45-ng7wf\" (UID: \"c5217f8b-c2c7-4600-8137-b3367ec052ad\") " pod="openstack/dnsmasq-dns-94b4f9f45-ng7wf"
Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.834552 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-bk7dn"
Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.842847 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-94b4f9f45-ng7wf"
Dec 15 05:50:16 crc kubenswrapper[4747]: I1215 05:50:16.882994 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6947456757-77mcq"
Dec 15 05:50:17 crc kubenswrapper[4747]: I1215 05:50:17.116359 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6947456757-77mcq"]
Dec 15 05:50:17 crc kubenswrapper[4747]: W1215 05:50:17.120406 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ba5bb5e_fcb8_46c5_861c_b0e72dee5308.slice/crio-2ce44351b6402f7d1a058a2e3210e3b9f4d31dc5d95344c38bc19171764e377c WatchSource:0}: Error finding container 2ce44351b6402f7d1a058a2e3210e3b9f4d31dc5d95344c38bc19171764e377c: Status 404 returned error can't find the container with id 2ce44351b6402f7d1a058a2e3210e3b9f4d31dc5d95344c38bc19171764e377c
Dec 15 05:50:17 crc kubenswrapper[4747]: I1215 05:50:17.162461 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6947456757-77mcq" event={"ID":"2ba5bb5e-fcb8-46c5-861c-b0e72dee5308","Type":"ContainerStarted","Data":"2ce44351b6402f7d1a058a2e3210e3b9f4d31dc5d95344c38bc19171764e377c"}
Dec 15 05:50:17 crc kubenswrapper[4747]: I1215 05:50:17.250037 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-94b4f9f45-ng7wf"]
Dec 15 05:50:17 crc kubenswrapper[4747]: W1215 05:50:17.253152 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5217f8b_c2c7_4600_8137_b3367ec052ad.slice/crio-7ae350d14bfae3628aa419b95314c37e7627bad64f609f440e606a1cd8100469 WatchSource:0}: Error finding container 7ae350d14bfae3628aa419b95314c37e7627bad64f609f440e606a1cd8100469: Status 404 returned error can't find the container with id 7ae350d14bfae3628aa419b95314c37e7627bad64f609f440e606a1cd8100469
Dec 15 05:50:18 crc kubenswrapper[4747]: I1215 05:50:18.172652 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-94b4f9f45-ng7wf" event={"ID":"c5217f8b-c2c7-4600-8137-b3367ec052ad","Type":"ContainerStarted","Data":"7ae350d14bfae3628aa419b95314c37e7627bad64f609f440e606a1cd8100469"}
Dec 15 05:50:19 crc kubenswrapper[4747]: I1215 05:50:19.698740 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6947456757-77mcq"]
Dec 15 05:50:19 crc kubenswrapper[4747]: I1215 05:50:19.724684 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55dc666865-9qwv7"]
Dec 15 05:50:19 crc kubenswrapper[4747]: I1215 05:50:19.727240 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55dc666865-9qwv7"
Dec 15 05:50:19 crc kubenswrapper[4747]: I1215 05:50:19.736915 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55dc666865-9qwv7"]
Dec 15 05:50:19 crc kubenswrapper[4747]: I1215 05:50:19.765876 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdpxn\" (UniqueName: \"kubernetes.io/projected/35d8d030-307c-47c1-97ce-b476cf1d3cf2-kube-api-access-fdpxn\") pod \"dnsmasq-dns-55dc666865-9qwv7\" (UID: \"35d8d030-307c-47c1-97ce-b476cf1d3cf2\") " pod="openstack/dnsmasq-dns-55dc666865-9qwv7"
Dec 15 05:50:19 crc kubenswrapper[4747]: I1215 05:50:19.766356 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35d8d030-307c-47c1-97ce-b476cf1d3cf2-config\") pod \"dnsmasq-dns-55dc666865-9qwv7\" (UID: \"35d8d030-307c-47c1-97ce-b476cf1d3cf2\") " pod="openstack/dnsmasq-dns-55dc666865-9qwv7"
Dec 15 05:50:19 crc kubenswrapper[4747]: I1215 05:50:19.766470 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35d8d030-307c-47c1-97ce-b476cf1d3cf2-dns-svc\") pod \"dnsmasq-dns-55dc666865-9qwv7\" (UID: \"35d8d030-307c-47c1-97ce-b476cf1d3cf2\") " pod="openstack/dnsmasq-dns-55dc666865-9qwv7"
Dec 15 05:50:19 crc kubenswrapper[4747]: I1215 05:50:19.870115 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdpxn\" (UniqueName: \"kubernetes.io/projected/35d8d030-307c-47c1-97ce-b476cf1d3cf2-kube-api-access-fdpxn\") pod \"dnsmasq-dns-55dc666865-9qwv7\" (UID: \"35d8d030-307c-47c1-97ce-b476cf1d3cf2\") " pod="openstack/dnsmasq-dns-55dc666865-9qwv7"
Dec 15 05:50:19 crc kubenswrapper[4747]: I1215 05:50:19.872917 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35d8d030-307c-47c1-97ce-b476cf1d3cf2-config\") pod \"dnsmasq-dns-55dc666865-9qwv7\" (UID: \"35d8d030-307c-47c1-97ce-b476cf1d3cf2\") " pod="openstack/dnsmasq-dns-55dc666865-9qwv7"
Dec 15 05:50:19 crc kubenswrapper[4747]: I1215 05:50:19.873813 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35d8d030-307c-47c1-97ce-b476cf1d3cf2-config\") pod \"dnsmasq-dns-55dc666865-9qwv7\" (UID: \"35d8d030-307c-47c1-97ce-b476cf1d3cf2\") " pod="openstack/dnsmasq-dns-55dc666865-9qwv7"
Dec 15 05:50:19 crc kubenswrapper[4747]: I1215 05:50:19.873051 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35d8d030-307c-47c1-97ce-b476cf1d3cf2-dns-svc\") pod \"dnsmasq-dns-55dc666865-9qwv7\" (UID: \"35d8d030-307c-47c1-97ce-b476cf1d3cf2\") " pod="openstack/dnsmasq-dns-55dc666865-9qwv7"
Dec 15 05:50:19 crc kubenswrapper[4747]: I1215 05:50:19.874007 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35d8d030-307c-47c1-97ce-b476cf1d3cf2-dns-svc\") pod \"dnsmasq-dns-55dc666865-9qwv7\" (UID: \"35d8d030-307c-47c1-97ce-b476cf1d3cf2\") " pod="openstack/dnsmasq-dns-55dc666865-9qwv7"
Dec 15 05:50:19 crc kubenswrapper[4747]: I1215 05:50:19.900757 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdpxn\" (UniqueName: \"kubernetes.io/projected/35d8d030-307c-47c1-97ce-b476cf1d3cf2-kube-api-access-fdpxn\") pod \"dnsmasq-dns-55dc666865-9qwv7\" (UID: \"35d8d030-307c-47c1-97ce-b476cf1d3cf2\") " pod="openstack/dnsmasq-dns-55dc666865-9qwv7"
Dec 15 05:50:19 crc kubenswrapper[4747]: I1215 05:50:19.991125 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-94b4f9f45-ng7wf"]
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.005844 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d9886d5bf-r27p4"]
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.007230 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d9886d5bf-r27p4"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.013663 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d9886d5bf-r27p4"]
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.044916 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55dc666865-9qwv7"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.076825 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45ad3e3b-7313-4633-9da2-b644eefbad5a-dns-svc\") pod \"dnsmasq-dns-5d9886d5bf-r27p4\" (UID: \"45ad3e3b-7313-4633-9da2-b644eefbad5a\") " pod="openstack/dnsmasq-dns-5d9886d5bf-r27p4"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.076913 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7chqg\" (UniqueName: \"kubernetes.io/projected/45ad3e3b-7313-4633-9da2-b644eefbad5a-kube-api-access-7chqg\") pod \"dnsmasq-dns-5d9886d5bf-r27p4\" (UID: \"45ad3e3b-7313-4633-9da2-b644eefbad5a\") " pod="openstack/dnsmasq-dns-5d9886d5bf-r27p4"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.077000 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45ad3e3b-7313-4633-9da2-b644eefbad5a-config\") pod \"dnsmasq-dns-5d9886d5bf-r27p4\" (UID: \"45ad3e3b-7313-4633-9da2-b644eefbad5a\") " pod="openstack/dnsmasq-dns-5d9886d5bf-r27p4"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.178500 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45ad3e3b-7313-4633-9da2-b644eefbad5a-config\") pod \"dnsmasq-dns-5d9886d5bf-r27p4\" (UID: \"45ad3e3b-7313-4633-9da2-b644eefbad5a\") " pod="openstack/dnsmasq-dns-5d9886d5bf-r27p4"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.178962 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45ad3e3b-7313-4633-9da2-b644eefbad5a-dns-svc\") pod \"dnsmasq-dns-5d9886d5bf-r27p4\" (UID: \"45ad3e3b-7313-4633-9da2-b644eefbad5a\") " pod="openstack/dnsmasq-dns-5d9886d5bf-r27p4"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.178992 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7chqg\" (UniqueName: \"kubernetes.io/projected/45ad3e3b-7313-4633-9da2-b644eefbad5a-kube-api-access-7chqg\") pod \"dnsmasq-dns-5d9886d5bf-r27p4\" (UID: \"45ad3e3b-7313-4633-9da2-b644eefbad5a\") " pod="openstack/dnsmasq-dns-5d9886d5bf-r27p4"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.179653 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45ad3e3b-7313-4633-9da2-b644eefbad5a-config\") pod \"dnsmasq-dns-5d9886d5bf-r27p4\" (UID: \"45ad3e3b-7313-4633-9da2-b644eefbad5a\") " pod="openstack/dnsmasq-dns-5d9886d5bf-r27p4"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.180043 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45ad3e3b-7313-4633-9da2-b644eefbad5a-dns-svc\") pod \"dnsmasq-dns-5d9886d5bf-r27p4\" (UID: \"45ad3e3b-7313-4633-9da2-b644eefbad5a\") " pod="openstack/dnsmasq-dns-5d9886d5bf-r27p4"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.195010 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7chqg\" (UniqueName: \"kubernetes.io/projected/45ad3e3b-7313-4633-9da2-b644eefbad5a-kube-api-access-7chqg\") pod \"dnsmasq-dns-5d9886d5bf-r27p4\" (UID: \"45ad3e3b-7313-4633-9da2-b644eefbad5a\") " pod="openstack/dnsmasq-dns-5d9886d5bf-r27p4"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.330340 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d9886d5bf-r27p4"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.488242 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55dc666865-9qwv7"]
Dec 15 05:50:20 crc kubenswrapper[4747]: W1215 05:50:20.499349 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35d8d030_307c_47c1_97ce_b476cf1d3cf2.slice/crio-ba08a20f5bdfc9d45706a5cf0e5287d8df4249d7491eff3de55a9b267d2ab414 WatchSource:0}: Error finding container ba08a20f5bdfc9d45706a5cf0e5287d8df4249d7491eff3de55a9b267d2ab414: Status 404 returned error can't find the container with id ba08a20f5bdfc9d45706a5cf0e5287d8df4249d7491eff3de55a9b267d2ab414
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.746077 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d9886d5bf-r27p4"]
Dec 15 05:50:20 crc kubenswrapper[4747]: W1215 05:50:20.749870 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45ad3e3b_7313_4633_9da2_b644eefbad5a.slice/crio-dbedae2b06be0edf8fc7fd1be4b929dc0d5415adfdac7188989abe3b0acc7c2d WatchSource:0}: Error finding container dbedae2b06be0edf8fc7fd1be4b929dc0d5415adfdac7188989abe3b0acc7c2d: Status 404 returned error can't find the container with id dbedae2b06be0edf8fc7fd1be4b929dc0d5415adfdac7188989abe3b0acc7c2d
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.877007 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.884677 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.887284 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.887528 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.887888 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.887917 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.888372 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.888863 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-68tlq"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.896250 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.904434 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.999562 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc6cq\" (UniqueName: \"kubernetes.io/projected/9bece5e6-b345-4969-a563-81fb3706f8f1-kube-api-access-sc6cq\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.999676 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9bece5e6-b345-4969-a563-81fb3706f8f1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.999733 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bece5e6-b345-4969-a563-81fb3706f8f1-config-data\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:20 crc kubenswrapper[4747]: I1215 05:50:20.999758 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9bece5e6-b345-4969-a563-81fb3706f8f1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:20.999946 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:20.999985 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.000005 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9bece5e6-b345-4969-a563-81fb3706f8f1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.000036 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.000102 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.000326 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9bece5e6-b345-4969-a563-81fb3706f8f1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.000449 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.102164 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.102205 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.102237 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9bece5e6-b345-4969-a563-81fb3706f8f1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.102265 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.102296 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.102326 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9bece5e6-b345-4969-a563-81fb3706f8f1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.102353 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.102400 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc6cq\" (UniqueName: \"kubernetes.io/projected/9bece5e6-b345-4969-a563-81fb3706f8f1-kube-api-access-sc6cq\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.102424 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9bece5e6-b345-4969-a563-81fb3706f8f1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.102443 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bece5e6-b345-4969-a563-81fb3706f8f1-config-data\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.102462 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9bece5e6-b345-4969-a563-81fb3706f8f1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.103227 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.104126 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.104443 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.104887 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9bece5e6-b345-4969-a563-81fb3706f8f1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.105236 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9bece5e6-b345-4969-a563-81fb3706f8f1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.105726 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bece5e6-b345-4969-a563-81fb3706f8f1-config-data\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.110946 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9bece5e6-b345-4969-a563-81fb3706f8f1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.111496 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9bece5e6-b345-4969-a563-81fb3706f8f1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.111624 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.112190 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.120195 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc6cq\" (UniqueName: \"kubernetes.io/projected/9bece5e6-b345-4969-a563-81fb3706f8f1-kube-api-access-sc6cq\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.127393 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.150234 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.151680 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.153823 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.153907 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.154375 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.154483 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.156069 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.156750 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.157057 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-25dgb"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.158499 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.196863 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d9886d5bf-r27p4" event={"ID":"45ad3e3b-7313-4633-9da2-b644eefbad5a","Type":"ContainerStarted","Data":"dbedae2b06be0edf8fc7fd1be4b929dc0d5415adfdac7188989abe3b0acc7c2d"}
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.198410 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55dc666865-9qwv7" event={"ID":"35d8d030-307c-47c1-97ce-b476cf1d3cf2","Type":"ContainerStarted","Data":"ba08a20f5bdfc9d45706a5cf0e5287d8df4249d7491eff3de55a9b267d2ab414"}
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.203159 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.203197 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/65a53faf-94ad-48f3-b8e0-8642376f89ee-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.203218 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.203251 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.203275 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.203293 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/65a53faf-94ad-48f3-b8e0-8642376f89ee-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.203321 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qb9m\" (UniqueName: \"kubernetes.io/projected/65a53faf-94ad-48f3-b8e0-8642376f89ee-kube-api-access-9qb9m\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.203359 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a53faf-94ad-48f3-b8e0-8642376f89ee-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.203378 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.203423 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/65a53faf-94ad-48f3-b8e0-8642376f89ee-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.203440 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/65a53faf-94ad-48f3-b8e0-8642376f89ee-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.244975 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.304738 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.304783 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/65a53faf-94ad-48f3-b8e0-8642376f89ee-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.304821 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qb9m\" (UniqueName: \"kubernetes.io/projected/65a53faf-94ad-48f3-b8e0-8642376f89ee-kube-api-access-9qb9m\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.304870 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a53faf-94ad-48f3-b8e0-8642376f89ee-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.304893 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.304973 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/65a53faf-94ad-48f3-b8e0-8642376f89ee-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.304995 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/65a53faf-94ad-48f3-b8e0-8642376f89ee-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.305016 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\"
(UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.305040 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/65a53faf-94ad-48f3-b8e0-8642376f89ee-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.305060 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.305104 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.305539 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.305668 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") device mount path \"/mnt/openstack/pv03\"" 
pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.306473 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.307007 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a53faf-94ad-48f3-b8e0-8642376f89ee-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.307474 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/65a53faf-94ad-48f3-b8e0-8642376f89ee-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.307650 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/65a53faf-94ad-48f3-b8e0-8642376f89ee-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.310167 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/65a53faf-94ad-48f3-b8e0-8642376f89ee-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.310283 4747 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/65a53faf-94ad-48f3-b8e0-8642376f89ee-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.312710 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.314008 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.320817 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qb9m\" (UniqueName: \"kubernetes.io/projected/65a53faf-94ad-48f3-b8e0-8642376f89ee-kube-api-access-9qb9m\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.328210 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.490539 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.686739 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 15 05:50:21 crc kubenswrapper[4747]: I1215 05:50:21.971300 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.207328 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9bece5e6-b345-4969-a563-81fb3706f8f1","Type":"ContainerStarted","Data":"e32387d719fe96325422479626ab5af7bf174ec9e1dc2028393d6024d87b28ca"} Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.209389 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"65a53faf-94ad-48f3-b8e0-8642376f89ee","Type":"ContainerStarted","Data":"c67cfbd36f130e3d07c9e6271ca1bf0ab69034153e7c19f48e1ae91464e73b2e"} Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.686512 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.687806 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.689653 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-srj2d" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.690150 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.690515 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.690598 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.697511 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.700227 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.729257 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f242c5ef-84fc-4437-86a0-0175e8ea123b-kolla-config\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.729381 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fcb5\" (UniqueName: \"kubernetes.io/projected/f242c5ef-84fc-4437-86a0-0175e8ea123b-kube-api-access-5fcb5\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.729429 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f242c5ef-84fc-4437-86a0-0175e8ea123b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.729458 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f242c5ef-84fc-4437-86a0-0175e8ea123b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.729534 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f242c5ef-84fc-4437-86a0-0175e8ea123b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.729568 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f242c5ef-84fc-4437-86a0-0175e8ea123b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.729605 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.729636 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/f242c5ef-84fc-4437-86a0-0175e8ea123b-config-data-default\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.832375 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f242c5ef-84fc-4437-86a0-0175e8ea123b-kolla-config\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.832456 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fcb5\" (UniqueName: \"kubernetes.io/projected/f242c5ef-84fc-4437-86a0-0175e8ea123b-kube-api-access-5fcb5\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.832476 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f242c5ef-84fc-4437-86a0-0175e8ea123b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.832499 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f242c5ef-84fc-4437-86a0-0175e8ea123b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.832537 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f242c5ef-84fc-4437-86a0-0175e8ea123b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.832584 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f242c5ef-84fc-4437-86a0-0175e8ea123b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.832606 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.832654 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f242c5ef-84fc-4437-86a0-0175e8ea123b-config-data-default\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.833409 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f242c5ef-84fc-4437-86a0-0175e8ea123b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.833577 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f242c5ef-84fc-4437-86a0-0175e8ea123b-kolla-config\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.833681 4747 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f242c5ef-84fc-4437-86a0-0175e8ea123b-config-data-default\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.833941 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.835162 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f242c5ef-84fc-4437-86a0-0175e8ea123b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.841403 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f242c5ef-84fc-4437-86a0-0175e8ea123b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.857873 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fcb5\" (UniqueName: \"kubernetes.io/projected/f242c5ef-84fc-4437-86a0-0175e8ea123b-kube-api-access-5fcb5\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.863304 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f242c5ef-84fc-4437-86a0-0175e8ea123b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:22 crc kubenswrapper[4747]: I1215 05:50:22.871318 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"f242c5ef-84fc-4437-86a0-0175e8ea123b\") " pod="openstack/openstack-galera-0" Dec 15 05:50:23 crc kubenswrapper[4747]: I1215 05:50:23.016656 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 15 05:50:23 crc kubenswrapper[4747]: W1215 05:50:23.444081 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf242c5ef_84fc_4437_86a0_0175e8ea123b.slice/crio-98bccac97feb01d67df18742412dce2daf1905f5413817283afc1d0a778a76a4 WatchSource:0}: Error finding container 98bccac97feb01d67df18742412dce2daf1905f5413817283afc1d0a778a76a4: Status 404 returned error can't find the container with id 98bccac97feb01d67df18742412dce2daf1905f5413817283afc1d0a778a76a4 Dec 15 05:50:23 crc kubenswrapper[4747]: I1215 05:50:23.448148 4747 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 15 05:50:23 crc kubenswrapper[4747]: I1215 05:50:23.453193 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 15 05:50:23 crc kubenswrapper[4747]: I1215 05:50:23.938399 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 15 05:50:23 crc kubenswrapper[4747]: I1215 05:50:23.940417 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:23 crc kubenswrapper[4747]: I1215 05:50:23.942834 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 15 05:50:23 crc kubenswrapper[4747]: I1215 05:50:23.942977 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 15 05:50:23 crc kubenswrapper[4747]: I1215 05:50:23.944026 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-7grwv" Dec 15 05:50:23 crc kubenswrapper[4747]: I1215 05:50:23.944066 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 15 05:50:23 crc kubenswrapper[4747]: I1215 05:50:23.947123 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.054783 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22da0dca-a59a-40f7-8dd2-95305eea5ee0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.054877 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/22da0dca-a59a-40f7-8dd2-95305eea5ee0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.054947 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/22da0dca-a59a-40f7-8dd2-95305eea5ee0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.054977 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m6l5\" (UniqueName: \"kubernetes.io/projected/22da0dca-a59a-40f7-8dd2-95305eea5ee0-kube-api-access-9m6l5\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.054995 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/22da0dca-a59a-40f7-8dd2-95305eea5ee0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.055025 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/22da0dca-a59a-40f7-8dd2-95305eea5ee0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.055115 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22da0dca-a59a-40f7-8dd2-95305eea5ee0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.055169 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.157021 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22da0dca-a59a-40f7-8dd2-95305eea5ee0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.157068 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.157106 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22da0dca-a59a-40f7-8dd2-95305eea5ee0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.157149 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/22da0dca-a59a-40f7-8dd2-95305eea5ee0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.157186 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/22da0dca-a59a-40f7-8dd2-95305eea5ee0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.157212 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m6l5\" (UniqueName: \"kubernetes.io/projected/22da0dca-a59a-40f7-8dd2-95305eea5ee0-kube-api-access-9m6l5\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.157228 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/22da0dca-a59a-40f7-8dd2-95305eea5ee0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.157260 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/22da0dca-a59a-40f7-8dd2-95305eea5ee0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.157423 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.158163 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/22da0dca-a59a-40f7-8dd2-95305eea5ee0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.158484 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/22da0dca-a59a-40f7-8dd2-95305eea5ee0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.158677 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22da0dca-a59a-40f7-8dd2-95305eea5ee0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.159025 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/22da0dca-a59a-40f7-8dd2-95305eea5ee0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.164161 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/22da0dca-a59a-40f7-8dd2-95305eea5ee0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.167324 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22da0dca-a59a-40f7-8dd2-95305eea5ee0-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.171370 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m6l5\" (UniqueName: \"kubernetes.io/projected/22da0dca-a59a-40f7-8dd2-95305eea5ee0-kube-api-access-9m6l5\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.176592 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"22da0dca-a59a-40f7-8dd2-95305eea5ee0\") " pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.226683 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f242c5ef-84fc-4437-86a0-0175e8ea123b","Type":"ContainerStarted","Data":"98bccac97feb01d67df18742412dce2daf1905f5413817283afc1d0a778a76a4"} Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.263653 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.461046 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.462352 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.471232 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.474548 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-l5gwl" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.474790 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.474915 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.671873 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tcm8\" (UniqueName: \"kubernetes.io/projected/8828a0c4-9d91-45ba-a6f7-3bd720a9596b-kube-api-access-9tcm8\") pod \"memcached-0\" (UID: \"8828a0c4-9d91-45ba-a6f7-3bd720a9596b\") " pod="openstack/memcached-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.672278 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8828a0c4-9d91-45ba-a6f7-3bd720a9596b-kolla-config\") pod \"memcached-0\" (UID: \"8828a0c4-9d91-45ba-a6f7-3bd720a9596b\") " pod="openstack/memcached-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.672319 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8828a0c4-9d91-45ba-a6f7-3bd720a9596b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8828a0c4-9d91-45ba-a6f7-3bd720a9596b\") " pod="openstack/memcached-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.672344 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8828a0c4-9d91-45ba-a6f7-3bd720a9596b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8828a0c4-9d91-45ba-a6f7-3bd720a9596b\") " pod="openstack/memcached-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.672454 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8828a0c4-9d91-45ba-a6f7-3bd720a9596b-config-data\") pod \"memcached-0\" (UID: \"8828a0c4-9d91-45ba-a6f7-3bd720a9596b\") " pod="openstack/memcached-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.772406 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.775825 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8828a0c4-9d91-45ba-a6f7-3bd720a9596b-config-data\") pod \"memcached-0\" (UID: \"8828a0c4-9d91-45ba-a6f7-3bd720a9596b\") " pod="openstack/memcached-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.776056 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tcm8\" (UniqueName: \"kubernetes.io/projected/8828a0c4-9d91-45ba-a6f7-3bd720a9596b-kube-api-access-9tcm8\") pod \"memcached-0\" (UID: \"8828a0c4-9d91-45ba-a6f7-3bd720a9596b\") " pod="openstack/memcached-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.776161 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8828a0c4-9d91-45ba-a6f7-3bd720a9596b-kolla-config\") pod \"memcached-0\" (UID: \"8828a0c4-9d91-45ba-a6f7-3bd720a9596b\") " pod="openstack/memcached-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.776196 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8828a0c4-9d91-45ba-a6f7-3bd720a9596b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8828a0c4-9d91-45ba-a6f7-3bd720a9596b\") " pod="openstack/memcached-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.776249 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8828a0c4-9d91-45ba-a6f7-3bd720a9596b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8828a0c4-9d91-45ba-a6f7-3bd720a9596b\") " pod="openstack/memcached-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.776907 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8828a0c4-9d91-45ba-a6f7-3bd720a9596b-kolla-config\") pod \"memcached-0\" (UID: \"8828a0c4-9d91-45ba-a6f7-3bd720a9596b\") " pod="openstack/memcached-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.780536 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8828a0c4-9d91-45ba-a6f7-3bd720a9596b-config-data\") pod \"memcached-0\" (UID: \"8828a0c4-9d91-45ba-a6f7-3bd720a9596b\") " pod="openstack/memcached-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.802247 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8828a0c4-9d91-45ba-a6f7-3bd720a9596b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8828a0c4-9d91-45ba-a6f7-3bd720a9596b\") " pod="openstack/memcached-0" Dec 15 05:50:24 crc kubenswrapper[4747]: I1215 05:50:24.804621 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8828a0c4-9d91-45ba-a6f7-3bd720a9596b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8828a0c4-9d91-45ba-a6f7-3bd720a9596b\") " pod="openstack/memcached-0" Dec 15 05:50:24 crc 
kubenswrapper[4747]: I1215 05:50:24.807292 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tcm8\" (UniqueName: \"kubernetes.io/projected/8828a0c4-9d91-45ba-a6f7-3bd720a9596b-kube-api-access-9tcm8\") pod \"memcached-0\" (UID: \"8828a0c4-9d91-45ba-a6f7-3bd720a9596b\") " pod="openstack/memcached-0" Dec 15 05:50:25 crc kubenswrapper[4747]: I1215 05:50:25.091770 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 15 05:50:25 crc kubenswrapper[4747]: I1215 05:50:25.245214 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"22da0dca-a59a-40f7-8dd2-95305eea5ee0","Type":"ContainerStarted","Data":"70717dc4b7de91456df8a4eb1297ac85ff824f054c6dc913b0f499897e2b873a"} Dec 15 05:50:25 crc kubenswrapper[4747]: I1215 05:50:25.488954 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 15 05:50:25 crc kubenswrapper[4747]: W1215 05:50:25.510548 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8828a0c4_9d91_45ba_a6f7_3bd720a9596b.slice/crio-d948ecd5243f57e65b2b4e1a91ff197c5d8c2d24764d9b48ed6a4f9ef8573efc WatchSource:0}: Error finding container d948ecd5243f57e65b2b4e1a91ff197c5d8c2d24764d9b48ed6a4f9ef8573efc: Status 404 returned error can't find the container with id d948ecd5243f57e65b2b4e1a91ff197c5d8c2d24764d9b48ed6a4f9ef8573efc Dec 15 05:50:26 crc kubenswrapper[4747]: I1215 05:50:26.209451 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 15 05:50:26 crc kubenswrapper[4747]: I1215 05:50:26.211001 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 15 05:50:26 crc kubenswrapper[4747]: I1215 05:50:26.213129 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-s6pw6" Dec 15 05:50:26 crc kubenswrapper[4747]: I1215 05:50:26.231779 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 15 05:50:26 crc kubenswrapper[4747]: I1215 05:50:26.261471 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8828a0c4-9d91-45ba-a6f7-3bd720a9596b","Type":"ContainerStarted","Data":"d948ecd5243f57e65b2b4e1a91ff197c5d8c2d24764d9b48ed6a4f9ef8573efc"} Dec 15 05:50:26 crc kubenswrapper[4747]: I1215 05:50:26.312475 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv4nb\" (UniqueName: \"kubernetes.io/projected/9180543c-2a85-4639-82ac-7180f7a1274c-kube-api-access-rv4nb\") pod \"kube-state-metrics-0\" (UID: \"9180543c-2a85-4639-82ac-7180f7a1274c\") " pod="openstack/kube-state-metrics-0" Dec 15 05:50:26 crc kubenswrapper[4747]: I1215 05:50:26.414626 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv4nb\" (UniqueName: \"kubernetes.io/projected/9180543c-2a85-4639-82ac-7180f7a1274c-kube-api-access-rv4nb\") pod \"kube-state-metrics-0\" (UID: \"9180543c-2a85-4639-82ac-7180f7a1274c\") " pod="openstack/kube-state-metrics-0" Dec 15 05:50:26 crc kubenswrapper[4747]: I1215 05:50:26.436512 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv4nb\" (UniqueName: \"kubernetes.io/projected/9180543c-2a85-4639-82ac-7180f7a1274c-kube-api-access-rv4nb\") pod \"kube-state-metrics-0\" (UID: \"9180543c-2a85-4639-82ac-7180f7a1274c\") " pod="openstack/kube-state-metrics-0" Dec 15 05:50:26 crc kubenswrapper[4747]: I1215 05:50:26.531567 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 15 05:50:26 crc kubenswrapper[4747]: I1215 05:50:26.978688 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 15 05:50:26 crc kubenswrapper[4747]: W1215 05:50:26.987570 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9180543c_2a85_4639_82ac_7180f7a1274c.slice/crio-b90a9f8864e43cfc819d42c4d34e7f93f40cbf2f5b999ec9feca0b5284da0d1a WatchSource:0}: Error finding container b90a9f8864e43cfc819d42c4d34e7f93f40cbf2f5b999ec9feca0b5284da0d1a: Status 404 returned error can't find the container with id b90a9f8864e43cfc819d42c4d34e7f93f40cbf2f5b999ec9feca0b5284da0d1a Dec 15 05:50:27 crc kubenswrapper[4747]: I1215 05:50:27.270776 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9180543c-2a85-4639-82ac-7180f7a1274c","Type":"ContainerStarted","Data":"b90a9f8864e43cfc819d42c4d34e7f93f40cbf2f5b999ec9feca0b5284da0d1a"} Dec 15 05:50:28 crc kubenswrapper[4747]: I1215 05:50:28.865733 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 05:50:28 crc kubenswrapper[4747]: I1215 05:50:28.865797 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 05:50:29 crc kubenswrapper[4747]: I1215 05:50:29.290376 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"9180543c-2a85-4639-82ac-7180f7a1274c","Type":"ContainerStarted","Data":"d7ce6367fcfd9a641aaca2a50cf1a049d3e7f84c489a9d2eaf31d66c0384e4eb"} Dec 15 05:50:29 crc kubenswrapper[4747]: I1215 05:50:29.290550 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 15 05:50:29 crc kubenswrapper[4747]: I1215 05:50:29.318482 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.270770782 podStartE2EDuration="3.318422443s" podCreationTimestamp="2025-12-15 05:50:26 +0000 UTC" firstStartedPulling="2025-12-15 05:50:26.992414467 +0000 UTC m=+790.688926385" lastFinishedPulling="2025-12-15 05:50:29.040066129 +0000 UTC m=+792.736578046" observedRunningTime="2025-12-15 05:50:29.304363896 +0000 UTC m=+793.000875814" watchObservedRunningTime="2025-12-15 05:50:29.318422443 +0000 UTC m=+793.014934361" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.152529 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-b65n4"] Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.154087 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.156513 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-f5lrf" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.156705 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.156842 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.172943 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b65n4"] Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.179172 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-jmz8h"] Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.180624 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.187215 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jmz8h"] Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.285344 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/becaa3b6-8cd5-4e55-9a81-0a21fec0a70b-var-log-ovn\") pod \"ovn-controller-b65n4\" (UID: \"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b\") " pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.285409 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/becaa3b6-8cd5-4e55-9a81-0a21fec0a70b-ovn-controller-tls-certs\") pod \"ovn-controller-b65n4\" (UID: \"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b\") " pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.285559 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/becaa3b6-8cd5-4e55-9a81-0a21fec0a70b-scripts\") pod \"ovn-controller-b65n4\" (UID: \"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b\") " pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.285664 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becaa3b6-8cd5-4e55-9a81-0a21fec0a70b-combined-ca-bundle\") pod \"ovn-controller-b65n4\" (UID: \"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b\") " pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.285691 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn58h\" (UniqueName: 
\"kubernetes.io/projected/becaa3b6-8cd5-4e55-9a81-0a21fec0a70b-kube-api-access-zn58h\") pod \"ovn-controller-b65n4\" (UID: \"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b\") " pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.285791 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh48k\" (UniqueName: \"kubernetes.io/projected/dca41dd5-5747-42a1-8703-30ae549342b7-kube-api-access-zh48k\") pod \"ovn-controller-ovs-jmz8h\" (UID: \"dca41dd5-5747-42a1-8703-30ae549342b7\") " pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.285833 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/becaa3b6-8cd5-4e55-9a81-0a21fec0a70b-var-run-ovn\") pod \"ovn-controller-b65n4\" (UID: \"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b\") " pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.285863 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/dca41dd5-5747-42a1-8703-30ae549342b7-var-log\") pod \"ovn-controller-ovs-jmz8h\" (UID: \"dca41dd5-5747-42a1-8703-30ae549342b7\") " pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.285906 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/dca41dd5-5747-42a1-8703-30ae549342b7-var-lib\") pod \"ovn-controller-ovs-jmz8h\" (UID: \"dca41dd5-5747-42a1-8703-30ae549342b7\") " pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.285944 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/dca41dd5-5747-42a1-8703-30ae549342b7-scripts\") pod \"ovn-controller-ovs-jmz8h\" (UID: \"dca41dd5-5747-42a1-8703-30ae549342b7\") " pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.285972 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/dca41dd5-5747-42a1-8703-30ae549342b7-etc-ovs\") pod \"ovn-controller-ovs-jmz8h\" (UID: \"dca41dd5-5747-42a1-8703-30ae549342b7\") " pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.285987 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/becaa3b6-8cd5-4e55-9a81-0a21fec0a70b-var-run\") pod \"ovn-controller-b65n4\" (UID: \"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b\") " pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.286005 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dca41dd5-5747-42a1-8703-30ae549342b7-var-run\") pod \"ovn-controller-ovs-jmz8h\" (UID: \"dca41dd5-5747-42a1-8703-30ae549342b7\") " pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.388137 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/dca41dd5-5747-42a1-8703-30ae549342b7-var-log\") pod \"ovn-controller-ovs-jmz8h\" (UID: \"dca41dd5-5747-42a1-8703-30ae549342b7\") " pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.388181 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/dca41dd5-5747-42a1-8703-30ae549342b7-var-lib\") pod \"ovn-controller-ovs-jmz8h\" (UID: 
\"dca41dd5-5747-42a1-8703-30ae549342b7\") " pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.388208 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dca41dd5-5747-42a1-8703-30ae549342b7-scripts\") pod \"ovn-controller-ovs-jmz8h\" (UID: \"dca41dd5-5747-42a1-8703-30ae549342b7\") " pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.388244 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/dca41dd5-5747-42a1-8703-30ae549342b7-etc-ovs\") pod \"ovn-controller-ovs-jmz8h\" (UID: \"dca41dd5-5747-42a1-8703-30ae549342b7\") " pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.388267 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/becaa3b6-8cd5-4e55-9a81-0a21fec0a70b-var-run\") pod \"ovn-controller-b65n4\" (UID: \"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b\") " pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.388287 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dca41dd5-5747-42a1-8703-30ae549342b7-var-run\") pod \"ovn-controller-ovs-jmz8h\" (UID: \"dca41dd5-5747-42a1-8703-30ae549342b7\") " pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.388358 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/becaa3b6-8cd5-4e55-9a81-0a21fec0a70b-var-log-ovn\") pod \"ovn-controller-b65n4\" (UID: \"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b\") " pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.388403 4747 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/becaa3b6-8cd5-4e55-9a81-0a21fec0a70b-ovn-controller-tls-certs\") pod \"ovn-controller-b65n4\" (UID: \"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b\") " pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.388450 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/becaa3b6-8cd5-4e55-9a81-0a21fec0a70b-scripts\") pod \"ovn-controller-b65n4\" (UID: \"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b\") " pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.388484 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becaa3b6-8cd5-4e55-9a81-0a21fec0a70b-combined-ca-bundle\") pod \"ovn-controller-b65n4\" (UID: \"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b\") " pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.388505 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn58h\" (UniqueName: \"kubernetes.io/projected/becaa3b6-8cd5-4e55-9a81-0a21fec0a70b-kube-api-access-zn58h\") pod \"ovn-controller-b65n4\" (UID: \"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b\") " pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.388530 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh48k\" (UniqueName: \"kubernetes.io/projected/dca41dd5-5747-42a1-8703-30ae549342b7-kube-api-access-zh48k\") pod \"ovn-controller-ovs-jmz8h\" (UID: \"dca41dd5-5747-42a1-8703-30ae549342b7\") " pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.388552 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/becaa3b6-8cd5-4e55-9a81-0a21fec0a70b-var-run-ovn\") pod \"ovn-controller-b65n4\" (UID: \"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b\") " pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.388941 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/becaa3b6-8cd5-4e55-9a81-0a21fec0a70b-var-log-ovn\") pod \"ovn-controller-b65n4\" (UID: \"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b\") " pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.389029 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/dca41dd5-5747-42a1-8703-30ae549342b7-var-log\") pod \"ovn-controller-ovs-jmz8h\" (UID: \"dca41dd5-5747-42a1-8703-30ae549342b7\") " pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.389108 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/becaa3b6-8cd5-4e55-9a81-0a21fec0a70b-var-run\") pod \"ovn-controller-b65n4\" (UID: \"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b\") " pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.389132 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/dca41dd5-5747-42a1-8703-30ae549342b7-var-run\") pod \"ovn-controller-ovs-jmz8h\" (UID: \"dca41dd5-5747-42a1-8703-30ae549342b7\") " pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.389147 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/dca41dd5-5747-42a1-8703-30ae549342b7-etc-ovs\") pod \"ovn-controller-ovs-jmz8h\" (UID: \"dca41dd5-5747-42a1-8703-30ae549342b7\") " 
pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.390050 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/dca41dd5-5747-42a1-8703-30ae549342b7-var-lib\") pod \"ovn-controller-ovs-jmz8h\" (UID: \"dca41dd5-5747-42a1-8703-30ae549342b7\") " pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.390129 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/becaa3b6-8cd5-4e55-9a81-0a21fec0a70b-var-run-ovn\") pod \"ovn-controller-b65n4\" (UID: \"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b\") " pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.392866 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dca41dd5-5747-42a1-8703-30ae549342b7-scripts\") pod \"ovn-controller-ovs-jmz8h\" (UID: \"dca41dd5-5747-42a1-8703-30ae549342b7\") " pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.396097 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becaa3b6-8cd5-4e55-9a81-0a21fec0a70b-combined-ca-bundle\") pod \"ovn-controller-b65n4\" (UID: \"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b\") " pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.396550 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/becaa3b6-8cd5-4e55-9a81-0a21fec0a70b-ovn-controller-tls-certs\") pod \"ovn-controller-b65n4\" (UID: \"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b\") " pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.402060 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/becaa3b6-8cd5-4e55-9a81-0a21fec0a70b-scripts\") pod \"ovn-controller-b65n4\" (UID: \"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b\") " pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.404267 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh48k\" (UniqueName: \"kubernetes.io/projected/dca41dd5-5747-42a1-8703-30ae549342b7-kube-api-access-zh48k\") pod \"ovn-controller-ovs-jmz8h\" (UID: \"dca41dd5-5747-42a1-8703-30ae549342b7\") " pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.406022 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn58h\" (UniqueName: \"kubernetes.io/projected/becaa3b6-8cd5-4e55-9a81-0a21fec0a70b-kube-api-access-zn58h\") pod \"ovn-controller-b65n4\" (UID: \"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b\") " pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.484727 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.486329 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.489580 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.489734 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.489898 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.489958 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.490145 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-7nb7d" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.493718 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.494331 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b65n4" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.506615 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.592334 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg2z8\" (UniqueName: \"kubernetes.io/projected/d84a0b88-fbfb-4d28-89e0-5a64b4a1430f-kube-api-access-xg2z8\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.592751 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.592776 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d84a0b88-fbfb-4d28-89e0-5a64b4a1430f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.592804 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84a0b88-fbfb-4d28-89e0-5a64b4a1430f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.592820 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d84a0b88-fbfb-4d28-89e0-5a64b4a1430f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 
crc kubenswrapper[4747]: I1215 05:50:30.592882 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d84a0b88-fbfb-4d28-89e0-5a64b4a1430f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.592966 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d84a0b88-fbfb-4d28-89e0-5a64b4a1430f-config\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.592993 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d84a0b88-fbfb-4d28-89e0-5a64b4a1430f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.695517 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d84a0b88-fbfb-4d28-89e0-5a64b4a1430f-config\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.695575 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d84a0b88-fbfb-4d28-89e0-5a64b4a1430f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.695636 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xg2z8\" (UniqueName: \"kubernetes.io/projected/d84a0b88-fbfb-4d28-89e0-5a64b4a1430f-kube-api-access-xg2z8\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.695777 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.695797 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d84a0b88-fbfb-4d28-89e0-5a64b4a1430f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.695823 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84a0b88-fbfb-4d28-89e0-5a64b4a1430f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.696154 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.696377 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d84a0b88-fbfb-4d28-89e0-5a64b4a1430f-config\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " 
pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.696679 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d84a0b88-fbfb-4d28-89e0-5a64b4a1430f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.695838 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d84a0b88-fbfb-4d28-89e0-5a64b4a1430f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.696833 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d84a0b88-fbfb-4d28-89e0-5a64b4a1430f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.696832 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d84a0b88-fbfb-4d28-89e0-5a64b4a1430f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.701754 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d84a0b88-fbfb-4d28-89e0-5a64b4a1430f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.701806 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84a0b88-fbfb-4d28-89e0-5a64b4a1430f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.703518 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d84a0b88-fbfb-4d28-89e0-5a64b4a1430f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.711253 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg2z8\" (UniqueName: \"kubernetes.io/projected/d84a0b88-fbfb-4d28-89e0-5a64b4a1430f-kube-api-access-xg2z8\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.720277 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f\") " pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.808742 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:30 crc kubenswrapper[4747]: I1215 05:50:30.973958 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b65n4"] Dec 15 05:50:30 crc kubenswrapper[4747]: W1215 05:50:30.981950 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbecaa3b6_8cd5_4e55_9a81_0a21fec0a70b.slice/crio-99387855a96919fad38cee70685566266740277c0bd8443bc2800c771ebeddaf WatchSource:0}: Error finding container 99387855a96919fad38cee70685566266740277c0bd8443bc2800c771ebeddaf: Status 404 returned error can't find the container with id 99387855a96919fad38cee70685566266740277c0bd8443bc2800c771ebeddaf Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.155824 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jmz8h"] Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.376654 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.392177 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b65n4" event={"ID":"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b","Type":"ContainerStarted","Data":"99387855a96919fad38cee70685566266740277c0bd8443bc2800c771ebeddaf"} Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.409081 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jmz8h" event={"ID":"dca41dd5-5747-42a1-8703-30ae549342b7","Type":"ContainerStarted","Data":"6072125e474e60da693adcb79b6a4c32eeb8d84b60e75ab9bcbb97075478915f"} Dec 15 05:50:31 crc kubenswrapper[4747]: W1215 05:50:31.431056 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd84a0b88_fbfb_4d28_89e0_5a64b4a1430f.slice/crio-49c93369aca76e9b94e7cdf324aaf36aa1094be46f6264f3c94f0947e787b8ea 
WatchSource:0}: Error finding container 49c93369aca76e9b94e7cdf324aaf36aa1094be46f6264f3c94f0947e787b8ea: Status 404 returned error can't find the container with id 49c93369aca76e9b94e7cdf324aaf36aa1094be46f6264f3c94f0947e787b8ea Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.703960 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-pl88m"] Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.705207 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-pl88m" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.711386 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.724035 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-pl88m"] Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.823340 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de8a2190-14e4-44fa-a3a7-18182a6b4df6-combined-ca-bundle\") pod \"ovn-controller-metrics-pl88m\" (UID: \"de8a2190-14e4-44fa-a3a7-18182a6b4df6\") " pod="openstack/ovn-controller-metrics-pl88m" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.823422 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2x9m\" (UniqueName: \"kubernetes.io/projected/de8a2190-14e4-44fa-a3a7-18182a6b4df6-kube-api-access-d2x9m\") pod \"ovn-controller-metrics-pl88m\" (UID: \"de8a2190-14e4-44fa-a3a7-18182a6b4df6\") " pod="openstack/ovn-controller-metrics-pl88m" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.823510 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/de8a2190-14e4-44fa-a3a7-18182a6b4df6-config\") pod \"ovn-controller-metrics-pl88m\" (UID: \"de8a2190-14e4-44fa-a3a7-18182a6b4df6\") " pod="openstack/ovn-controller-metrics-pl88m" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.823574 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/de8a2190-14e4-44fa-a3a7-18182a6b4df6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-pl88m\" (UID: \"de8a2190-14e4-44fa-a3a7-18182a6b4df6\") " pod="openstack/ovn-controller-metrics-pl88m" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.823678 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/de8a2190-14e4-44fa-a3a7-18182a6b4df6-ovn-rundir\") pod \"ovn-controller-metrics-pl88m\" (UID: \"de8a2190-14e4-44fa-a3a7-18182a6b4df6\") " pod="openstack/ovn-controller-metrics-pl88m" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.823749 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/de8a2190-14e4-44fa-a3a7-18182a6b4df6-ovs-rundir\") pod \"ovn-controller-metrics-pl88m\" (UID: \"de8a2190-14e4-44fa-a3a7-18182a6b4df6\") " pod="openstack/ovn-controller-metrics-pl88m" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.827681 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55dc666865-9qwv7"] Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.858586 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-767d7fb4d9-hlwhp"] Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.861322 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.864560 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.876894 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-767d7fb4d9-hlwhp"] Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.925570 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de8a2190-14e4-44fa-a3a7-18182a6b4df6-combined-ca-bundle\") pod \"ovn-controller-metrics-pl88m\" (UID: \"de8a2190-14e4-44fa-a3a7-18182a6b4df6\") " pod="openstack/ovn-controller-metrics-pl88m" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.925628 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2x9m\" (UniqueName: \"kubernetes.io/projected/de8a2190-14e4-44fa-a3a7-18182a6b4df6-kube-api-access-d2x9m\") pod \"ovn-controller-metrics-pl88m\" (UID: \"de8a2190-14e4-44fa-a3a7-18182a6b4df6\") " pod="openstack/ovn-controller-metrics-pl88m" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.925701 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hksl\" (UniqueName: \"kubernetes.io/projected/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-kube-api-access-5hksl\") pod \"dnsmasq-dns-767d7fb4d9-hlwhp\" (UID: \"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15\") " pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.925752 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de8a2190-14e4-44fa-a3a7-18182a6b4df6-config\") pod \"ovn-controller-metrics-pl88m\" (UID: \"de8a2190-14e4-44fa-a3a7-18182a6b4df6\") " pod="openstack/ovn-controller-metrics-pl88m" Dec 15 
05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.925800 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/de8a2190-14e4-44fa-a3a7-18182a6b4df6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-pl88m\" (UID: \"de8a2190-14e4-44fa-a3a7-18182a6b4df6\") " pod="openstack/ovn-controller-metrics-pl88m" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.925817 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-dns-svc\") pod \"dnsmasq-dns-767d7fb4d9-hlwhp\" (UID: \"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15\") " pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.925844 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/de8a2190-14e4-44fa-a3a7-18182a6b4df6-ovn-rundir\") pod \"ovn-controller-metrics-pl88m\" (UID: \"de8a2190-14e4-44fa-a3a7-18182a6b4df6\") " pod="openstack/ovn-controller-metrics-pl88m" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.925888 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/de8a2190-14e4-44fa-a3a7-18182a6b4df6-ovs-rundir\") pod \"ovn-controller-metrics-pl88m\" (UID: \"de8a2190-14e4-44fa-a3a7-18182a6b4df6\") " pod="openstack/ovn-controller-metrics-pl88m" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.925966 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-config\") pod \"dnsmasq-dns-767d7fb4d9-hlwhp\" (UID: \"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15\") " pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 
05:50:31.926028 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-ovsdbserver-nb\") pod \"dnsmasq-dns-767d7fb4d9-hlwhp\" (UID: \"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15\") " pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.926897 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/de8a2190-14e4-44fa-a3a7-18182a6b4df6-ovn-rundir\") pod \"ovn-controller-metrics-pl88m\" (UID: \"de8a2190-14e4-44fa-a3a7-18182a6b4df6\") " pod="openstack/ovn-controller-metrics-pl88m" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.926915 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/de8a2190-14e4-44fa-a3a7-18182a6b4df6-ovs-rundir\") pod \"ovn-controller-metrics-pl88m\" (UID: \"de8a2190-14e4-44fa-a3a7-18182a6b4df6\") " pod="openstack/ovn-controller-metrics-pl88m" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.927284 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de8a2190-14e4-44fa-a3a7-18182a6b4df6-config\") pod \"ovn-controller-metrics-pl88m\" (UID: \"de8a2190-14e4-44fa-a3a7-18182a6b4df6\") " pod="openstack/ovn-controller-metrics-pl88m" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.933832 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/de8a2190-14e4-44fa-a3a7-18182a6b4df6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-pl88m\" (UID: \"de8a2190-14e4-44fa-a3a7-18182a6b4df6\") " pod="openstack/ovn-controller-metrics-pl88m" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.938906 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de8a2190-14e4-44fa-a3a7-18182a6b4df6-combined-ca-bundle\") pod \"ovn-controller-metrics-pl88m\" (UID: \"de8a2190-14e4-44fa-a3a7-18182a6b4df6\") " pod="openstack/ovn-controller-metrics-pl88m" Dec 15 05:50:31 crc kubenswrapper[4747]: I1215 05:50:31.940212 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2x9m\" (UniqueName: \"kubernetes.io/projected/de8a2190-14e4-44fa-a3a7-18182a6b4df6-kube-api-access-d2x9m\") pod \"ovn-controller-metrics-pl88m\" (UID: \"de8a2190-14e4-44fa-a3a7-18182a6b4df6\") " pod="openstack/ovn-controller-metrics-pl88m" Dec 15 05:50:32 crc kubenswrapper[4747]: I1215 05:50:32.030067 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-config\") pod \"dnsmasq-dns-767d7fb4d9-hlwhp\" (UID: \"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15\") " pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" Dec 15 05:50:32 crc kubenswrapper[4747]: I1215 05:50:32.030154 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-ovsdbserver-nb\") pod \"dnsmasq-dns-767d7fb4d9-hlwhp\" (UID: \"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15\") " pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" Dec 15 05:50:32 crc kubenswrapper[4747]: I1215 05:50:32.030239 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hksl\" (UniqueName: \"kubernetes.io/projected/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-kube-api-access-5hksl\") pod \"dnsmasq-dns-767d7fb4d9-hlwhp\" (UID: \"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15\") " pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" Dec 15 05:50:32 crc kubenswrapper[4747]: I1215 05:50:32.030324 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-dns-svc\") pod \"dnsmasq-dns-767d7fb4d9-hlwhp\" (UID: \"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15\") " pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" Dec 15 05:50:32 crc kubenswrapper[4747]: I1215 05:50:32.032057 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-config\") pod \"dnsmasq-dns-767d7fb4d9-hlwhp\" (UID: \"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15\") " pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" Dec 15 05:50:32 crc kubenswrapper[4747]: I1215 05:50:32.032178 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-ovsdbserver-nb\") pod \"dnsmasq-dns-767d7fb4d9-hlwhp\" (UID: \"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15\") " pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" Dec 15 05:50:32 crc kubenswrapper[4747]: I1215 05:50:32.033304 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-dns-svc\") pod \"dnsmasq-dns-767d7fb4d9-hlwhp\" (UID: \"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15\") " pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" Dec 15 05:50:32 crc kubenswrapper[4747]: I1215 05:50:32.036667 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-pl88m" Dec 15 05:50:32 crc kubenswrapper[4747]: I1215 05:50:32.046384 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hksl\" (UniqueName: \"kubernetes.io/projected/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-kube-api-access-5hksl\") pod \"dnsmasq-dns-767d7fb4d9-hlwhp\" (UID: \"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15\") " pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" Dec 15 05:50:32 crc kubenswrapper[4747]: I1215 05:50:32.188084 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" Dec 15 05:50:32 crc kubenswrapper[4747]: I1215 05:50:32.425123 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-pl88m"] Dec 15 05:50:32 crc kubenswrapper[4747]: I1215 05:50:32.426146 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f","Type":"ContainerStarted","Data":"49c93369aca76e9b94e7cdf324aaf36aa1094be46f6264f3c94f0947e787b8ea"} Dec 15 05:50:32 crc kubenswrapper[4747]: I1215 05:50:32.594911 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-767d7fb4d9-hlwhp"] Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.436018 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-pl88m" event={"ID":"de8a2190-14e4-44fa-a3a7-18182a6b4df6","Type":"ContainerStarted","Data":"462ff08cdb5e2c2bf6b3505ef8d9592e9ce453f1feec92db0bc4fc6ec5d9d91f"} Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.771203 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.772971 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.778377 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.778493 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.778597 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.778715 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-pwxmd" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.787963 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.868135 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/633ee263-eae2-4211-ae9e-d0efd7f7ac2f-config\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.868175 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/633ee263-eae2-4211-ae9e-d0efd7f7ac2f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.868228 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8spnr\" (UniqueName: \"kubernetes.io/projected/633ee263-eae2-4211-ae9e-d0efd7f7ac2f-kube-api-access-8spnr\") pod \"ovsdbserver-sb-0\" (UID: 
\"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.868257 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/633ee263-eae2-4211-ae9e-d0efd7f7ac2f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.868295 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/633ee263-eae2-4211-ae9e-d0efd7f7ac2f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.868332 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/633ee263-eae2-4211-ae9e-d0efd7f7ac2f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.868385 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/633ee263-eae2-4211-ae9e-d0efd7f7ac2f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.868402 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 
05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.972421 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/633ee263-eae2-4211-ae9e-d0efd7f7ac2f-config\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.972457 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/633ee263-eae2-4211-ae9e-d0efd7f7ac2f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.972495 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8spnr\" (UniqueName: \"kubernetes.io/projected/633ee263-eae2-4211-ae9e-d0efd7f7ac2f-kube-api-access-8spnr\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.972521 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/633ee263-eae2-4211-ae9e-d0efd7f7ac2f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.972547 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/633ee263-eae2-4211-ae9e-d0efd7f7ac2f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.972577 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/633ee263-eae2-4211-ae9e-d0efd7f7ac2f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.972613 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/633ee263-eae2-4211-ae9e-d0efd7f7ac2f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.972636 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.972907 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.973214 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/633ee263-eae2-4211-ae9e-d0efd7f7ac2f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.973747 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/633ee263-eae2-4211-ae9e-d0efd7f7ac2f-config\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 
05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.973986 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/633ee263-eae2-4211-ae9e-d0efd7f7ac2f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.980473 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/633ee263-eae2-4211-ae9e-d0efd7f7ac2f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.981586 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/633ee263-eae2-4211-ae9e-d0efd7f7ac2f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.985721 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/633ee263-eae2-4211-ae9e-d0efd7f7ac2f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.988316 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8spnr\" (UniqueName: \"kubernetes.io/projected/633ee263-eae2-4211-ae9e-d0efd7f7ac2f-kube-api-access-8spnr\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:33 crc kubenswrapper[4747]: I1215 05:50:33.989828 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"633ee263-eae2-4211-ae9e-d0efd7f7ac2f\") " pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:34 crc kubenswrapper[4747]: I1215 05:50:34.101192 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 15 05:50:35 crc kubenswrapper[4747]: W1215 05:50:35.723275 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6dce00fa_ec1d_4bba_a8b2_a0b41ce4ad15.slice/crio-dde568c464b88fca4f2ff2be787c1921d1925e2d4e6e23c94748071de6389419 WatchSource:0}: Error finding container dde568c464b88fca4f2ff2be787c1921d1925e2d4e6e23c94748071de6389419: Status 404 returned error can't find the container with id dde568c464b88fca4f2ff2be787c1921d1925e2d4e6e23c94748071de6389419 Dec 15 05:50:36 crc kubenswrapper[4747]: I1215 05:50:36.460998 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" event={"ID":"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15","Type":"ContainerStarted","Data":"dde568c464b88fca4f2ff2be787c1921d1925e2d4e6e23c94748071de6389419"} Dec 15 05:50:36 crc kubenswrapper[4747]: I1215 05:50:36.537039 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 15 05:50:47 crc kubenswrapper[4747]: E1215 05:50:47.976657 4747 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos9/openstack-rabbitmq:2e38c527ddf6e767040136ecf014e7b9" Dec 15 05:50:47 crc kubenswrapper[4747]: E1215 05:50:47.977201 4747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos9/openstack-rabbitmq:2e38c527ddf6e767040136ecf014e7b9" Dec 15 05:50:47 crc kubenswrapper[4747]: E1215 
05:50:47.977354 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-master-centos9/openstack-rabbitmq:2e38c527ddf6e767040136ecf014e7b9,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qb9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/
serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(65a53faf-94ad-48f3-b8e0-8642376f89ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 15 05:50:47 crc kubenswrapper[4747]: E1215 05:50:47.979254 4747 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos9/openstack-rabbitmq:2e38c527ddf6e767040136ecf014e7b9" Dec 15 05:50:47 crc kubenswrapper[4747]: E1215 05:50:47.979276 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="65a53faf-94ad-48f3-b8e0-8642376f89ee" Dec 15 05:50:47 crc kubenswrapper[4747]: E1215 05:50:47.979303 4747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos9/openstack-rabbitmq:2e38c527ddf6e767040136ecf014e7b9" Dec 15 05:50:47 crc kubenswrapper[4747]: E1215 05:50:47.979425 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:setup-container,Image:quay.rdoproject.org/podified-master-centos9/openstack-rabbitmq:2e38c527ddf6e767040136ecf014e7b9,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc6cq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},Live
nessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(9bece5e6-b345-4969-a563-81fb3706f8f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 15 05:50:47 crc kubenswrapper[4747]: E1215 05:50:47.981836 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="9bece5e6-b345-4969-a563-81fb3706f8f1" Dec 15 05:50:48 crc kubenswrapper[4747]: E1215 05:50:48.561045 4747 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos9/openstack-neutron-server:2e38c527ddf6e767040136ecf014e7b9" Dec 15 05:50:48 crc kubenswrapper[4747]: E1215 05:50:48.561125 4747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos9/openstack-neutron-server:2e38c527ddf6e767040136ecf014e7b9" Dec 15 05:50:48 crc kubenswrapper[4747]: E1215 05:50:48.561312 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.rdoproject.org/podified-master-centos9/openstack-neutron-server:2e38c527ddf6e767040136ecf014e7b9,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jw5j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
dnsmasq-dns-94b4f9f45-ng7wf_openstack(c5217f8b-c2c7-4600-8137-b3367ec052ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 15 05:50:48 crc kubenswrapper[4747]: E1215 05:50:48.563315 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-94b4f9f45-ng7wf" podUID="c5217f8b-c2c7-4600-8137-b3367ec052ad" Dec 15 05:50:48 crc kubenswrapper[4747]: E1215 05:50:48.563866 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos9/openstack-rabbitmq:2e38c527ddf6e767040136ecf014e7b9\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="65a53faf-94ad-48f3-b8e0-8642376f89ee" Dec 15 05:50:48 crc kubenswrapper[4747]: E1215 05:50:48.563942 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos9/openstack-rabbitmq:2e38c527ddf6e767040136ecf014e7b9\\\"\"" pod="openstack/rabbitmq-server-0" podUID="9bece5e6-b345-4969-a563-81fb3706f8f1" Dec 15 05:50:50 crc kubenswrapper[4747]: E1215 05:50:50.014522 4747 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos9/openstack-mariadb:2e38c527ddf6e767040136ecf014e7b9" Dec 15 05:50:50 crc kubenswrapper[4747]: E1215 05:50:50.014882 4747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos9/openstack-mariadb:2e38c527ddf6e767040136ecf014e7b9" Dec 15 05:50:50 crc kubenswrapper[4747]: E1215 
05:50:50.015070 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.rdoproject.org/podified-master-centos9/openstack-mariadb:2e38c527ddf6e767040136ecf014e7b9,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fcb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFrom
Source{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(f242c5ef-84fc-4437-86a0-0175e8ea123b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 15 05:50:50 crc kubenswrapper[4747]: E1215 05:50:50.016712 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="f242c5ef-84fc-4437-86a0-0175e8ea123b" Dec 15 05:50:50 crc kubenswrapper[4747]: E1215 05:50:50.030270 4747 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos9/openstack-neutron-server:2e38c527ddf6e767040136ecf014e7b9" Dec 15 05:50:50 crc kubenswrapper[4747]: E1215 05:50:50.030335 4747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos9/openstack-neutron-server:2e38c527ddf6e767040136ecf014e7b9" Dec 15 05:50:50 crc kubenswrapper[4747]: E1215 05:50:50.030490 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos9/openstack-neutron-server:2e38c527ddf6e767040136ecf014e7b9,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7chqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5d9886d5bf-r27p4_openstack(45ad3e3b-7313-4633-9da2-b644eefbad5a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 15 05:50:50 crc kubenswrapper[4747]: E1215 05:50:50.031713 4747 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5d9886d5bf-r27p4" podUID="45ad3e3b-7313-4633-9da2-b644eefbad5a" Dec 15 05:50:50 crc kubenswrapper[4747]: E1215 05:50:50.581469 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos9/openstack-neutron-server:2e38c527ddf6e767040136ecf014e7b9\\\"\"" pod="openstack/dnsmasq-dns-5d9886d5bf-r27p4" podUID="45ad3e3b-7313-4633-9da2-b644eefbad5a" Dec 15 05:50:50 crc kubenswrapper[4747]: E1215 05:50:50.581604 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos9/openstack-mariadb:2e38c527ddf6e767040136ecf014e7b9\\\"\"" pod="openstack/openstack-galera-0" podUID="f242c5ef-84fc-4437-86a0-0175e8ea123b" Dec 15 05:50:51 crc kubenswrapper[4747]: E1215 05:50:51.649617 4747 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos9/openstack-memcached:2e38c527ddf6e767040136ecf014e7b9" Dec 15 05:50:51 crc kubenswrapper[4747]: E1215 05:50:51.650158 4747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos9/openstack-memcached:2e38c527ddf6e767040136ecf014e7b9" Dec 15 05:50:51 crc kubenswrapper[4747]: E1215 05:50:51.650421 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.rdoproject.org/podified-master-centos9/openstack-memcached:2e38c527ddf6e767040136ecf014e7b9,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5bch586h565hb9hd4h5fh6h7bh68ch5f8h684h549h88h554h5dh59dh554h5f6h56h54dh78h79h586h7bhb6hcdh666h5fbh68bh5f8h56dh5cbq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9tcm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(8828a0c4-9d91-45ba-a6f7-3bd720a9596b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 15 05:50:51 crc kubenswrapper[4747]: E1215 05:50:51.651667 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="8828a0c4-9d91-45ba-a6f7-3bd720a9596b" Dec 15 05:50:51 crc kubenswrapper[4747]: E1215 05:50:51.704592 4747 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos9/openstack-neutron-server:2e38c527ddf6e767040136ecf014e7b9" Dec 15 05:50:51 crc kubenswrapper[4747]: E1215 05:50:51.704650 4747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context 
canceled" image="quay.rdoproject.org/podified-master-centos9/openstack-neutron-server:2e38c527ddf6e767040136ecf014e7b9" Dec 15 05:50:51 crc kubenswrapper[4747]: E1215 05:50:51.704784 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos9/openstack-neutron-server:2e38c527ddf6e767040136ecf014e7b9,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cpsrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEs
calation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6947456757-77mcq_openstack(2ba5bb5e-fcb8-46c5-861c-b0e72dee5308): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 15 05:50:51 crc kubenswrapper[4747]: E1215 05:50:51.705989 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6947456757-77mcq" podUID="2ba5bb5e-fcb8-46c5-861c-b0e72dee5308" Dec 15 05:50:51 crc kubenswrapper[4747]: E1215 05:50:51.708315 4747 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos9/openstack-neutron-server:2e38c527ddf6e767040136ecf014e7b9" Dec 15 05:50:51 crc kubenswrapper[4747]: E1215 05:50:51.708411 4747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos9/openstack-neutron-server:2e38c527ddf6e767040136ecf014e7b9" Dec 15 05:50:51 crc kubenswrapper[4747]: E1215 05:50:51.708515 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos9/openstack-neutron-server:2e38c527ddf6e767040136ecf014e7b9,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- 
--no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdpxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-55dc666865-9qwv7_openstack(35d8d030-307c-47c1-97ce-b476cf1d3cf2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 15 05:50:51 crc 
kubenswrapper[4747]: E1215 05:50:51.709687 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-55dc666865-9qwv7" podUID="35d8d030-307c-47c1-97ce-b476cf1d3cf2" Dec 15 05:50:52 crc kubenswrapper[4747]: I1215 05:50:52.093114 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 15 05:50:52 crc kubenswrapper[4747]: I1215 05:50:52.598318 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-94b4f9f45-ng7wf" event={"ID":"c5217f8b-c2c7-4600-8137-b3367ec052ad","Type":"ContainerDied","Data":"7ae350d14bfae3628aa419b95314c37e7627bad64f609f440e606a1cd8100469"} Dec 15 05:50:52 crc kubenswrapper[4747]: I1215 05:50:52.598386 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ae350d14bfae3628aa419b95314c37e7627bad64f609f440e606a1cd8100469" Dec 15 05:50:52 crc kubenswrapper[4747]: I1215 05:50:52.599347 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-94b4f9f45-ng7wf" Dec 15 05:50:52 crc kubenswrapper[4747]: E1215 05:50:52.602036 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos9/openstack-memcached:2e38c527ddf6e767040136ecf014e7b9\\\"\"" pod="openstack/memcached-0" podUID="8828a0c4-9d91-45ba-a6f7-3bd720a9596b" Dec 15 05:50:52 crc kubenswrapper[4747]: I1215 05:50:52.682708 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jw5j\" (UniqueName: \"kubernetes.io/projected/c5217f8b-c2c7-4600-8137-b3367ec052ad-kube-api-access-9jw5j\") pod \"c5217f8b-c2c7-4600-8137-b3367ec052ad\" (UID: \"c5217f8b-c2c7-4600-8137-b3367ec052ad\") " Dec 15 05:50:52 crc kubenswrapper[4747]: I1215 05:50:52.682806 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5217f8b-c2c7-4600-8137-b3367ec052ad-config\") pod \"c5217f8b-c2c7-4600-8137-b3367ec052ad\" (UID: \"c5217f8b-c2c7-4600-8137-b3367ec052ad\") " Dec 15 05:50:52 crc kubenswrapper[4747]: I1215 05:50:52.684800 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5217f8b-c2c7-4600-8137-b3367ec052ad-config" (OuterVolumeSpecName: "config") pod "c5217f8b-c2c7-4600-8137-b3367ec052ad" (UID: "c5217f8b-c2c7-4600-8137-b3367ec052ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:50:52 crc kubenswrapper[4747]: I1215 05:50:52.694642 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5217f8b-c2c7-4600-8137-b3367ec052ad-kube-api-access-9jw5j" (OuterVolumeSpecName: "kube-api-access-9jw5j") pod "c5217f8b-c2c7-4600-8137-b3367ec052ad" (UID: "c5217f8b-c2c7-4600-8137-b3367ec052ad"). 
InnerVolumeSpecName "kube-api-access-9jw5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:50:52 crc kubenswrapper[4747]: I1215 05:50:52.785285 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jw5j\" (UniqueName: \"kubernetes.io/projected/c5217f8b-c2c7-4600-8137-b3367ec052ad-kube-api-access-9jw5j\") on node \"crc\" DevicePath \"\"" Dec 15 05:50:52 crc kubenswrapper[4747]: I1215 05:50:52.785490 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5217f8b-c2c7-4600-8137-b3367ec052ad-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:50:52 crc kubenswrapper[4747]: E1215 05:50:52.818160 4747 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos9/openstack-ovn-controller:2e38c527ddf6e767040136ecf014e7b9" Dec 15 05:50:52 crc kubenswrapper[4747]: E1215 05:50:52.818226 4747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos9/openstack-ovn-controller:2e38c527ddf6e767040136ecf014e7b9" Dec 15 05:50:52 crc kubenswrapper[4747]: E1215 05:50:52.818432 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.rdoproject.org/podified-master-centos9/openstack-ovn-controller:2e38c527ddf6e767040136ecf014e7b9,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n656h676hbfh599hb6h667h546h677hcch658h5bbh544h67fh5bfh7bh578hf4h54dh5b5h5h86h54ch87h65hf5h65bh597h5d7h58ch556h6dh684q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zn58h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:
nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-b65n4_openstack(becaa3b6-8cd5-4e55-9a81-0a21fec0a70b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 15 05:50:52 crc kubenswrapper[4747]: E1215 05:50:52.819616 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-b65n4" podUID="becaa3b6-8cd5-4e55-9a81-0a21fec0a70b" Dec 15 05:50:53 crc kubenswrapper[4747]: W1215 05:50:53.023393 4747 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod633ee263_eae2_4211_ae9e_d0efd7f7ac2f.slice/crio-ebbe3503734ece24c0acecc8b2521919acb2a01930afa5a2e8c4b18e7ca6782b WatchSource:0}: Error finding container ebbe3503734ece24c0acecc8b2521919acb2a01930afa5a2e8c4b18e7ca6782b: Status 404 returned error can't find the container with id ebbe3503734ece24c0acecc8b2521919acb2a01930afa5a2e8c4b18e7ca6782b Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.244376 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55dc666865-9qwv7" Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.249094 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6947456757-77mcq" Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.305613 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba5bb5e-fcb8-46c5-861c-b0e72dee5308-config\") pod \"2ba5bb5e-fcb8-46c5-861c-b0e72dee5308\" (UID: \"2ba5bb5e-fcb8-46c5-861c-b0e72dee5308\") " Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.305691 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdpxn\" (UniqueName: \"kubernetes.io/projected/35d8d030-307c-47c1-97ce-b476cf1d3cf2-kube-api-access-fdpxn\") pod \"35d8d030-307c-47c1-97ce-b476cf1d3cf2\" (UID: \"35d8d030-307c-47c1-97ce-b476cf1d3cf2\") " Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.305764 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ba5bb5e-fcb8-46c5-861c-b0e72dee5308-dns-svc\") pod \"2ba5bb5e-fcb8-46c5-861c-b0e72dee5308\" (UID: \"2ba5bb5e-fcb8-46c5-861c-b0e72dee5308\") " Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.305805 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/35d8d030-307c-47c1-97ce-b476cf1d3cf2-dns-svc\") pod \"35d8d030-307c-47c1-97ce-b476cf1d3cf2\" (UID: \"35d8d030-307c-47c1-97ce-b476cf1d3cf2\") " Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.305826 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35d8d030-307c-47c1-97ce-b476cf1d3cf2-config\") pod \"35d8d030-307c-47c1-97ce-b476cf1d3cf2\" (UID: \"35d8d030-307c-47c1-97ce-b476cf1d3cf2\") " Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.305907 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpsrz\" (UniqueName: \"kubernetes.io/projected/2ba5bb5e-fcb8-46c5-861c-b0e72dee5308-kube-api-access-cpsrz\") pod \"2ba5bb5e-fcb8-46c5-861c-b0e72dee5308\" (UID: \"2ba5bb5e-fcb8-46c5-861c-b0e72dee5308\") " Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.306257 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35d8d030-307c-47c1-97ce-b476cf1d3cf2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "35d8d030-307c-47c1-97ce-b476cf1d3cf2" (UID: "35d8d030-307c-47c1-97ce-b476cf1d3cf2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.306479 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba5bb5e-fcb8-46c5-861c-b0e72dee5308-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2ba5bb5e-fcb8-46c5-861c-b0e72dee5308" (UID: "2ba5bb5e-fcb8-46c5-861c-b0e72dee5308"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.306795 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35d8d030-307c-47c1-97ce-b476cf1d3cf2-config" (OuterVolumeSpecName: "config") pod "35d8d030-307c-47c1-97ce-b476cf1d3cf2" (UID: "35d8d030-307c-47c1-97ce-b476cf1d3cf2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.306989 4747 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ba5bb5e-fcb8-46c5-861c-b0e72dee5308-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.307006 4747 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35d8d030-307c-47c1-97ce-b476cf1d3cf2-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.307016 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35d8d030-307c-47c1-97ce-b476cf1d3cf2-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.307273 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba5bb5e-fcb8-46c5-861c-b0e72dee5308-config" (OuterVolumeSpecName: "config") pod "2ba5bb5e-fcb8-46c5-861c-b0e72dee5308" (UID: "2ba5bb5e-fcb8-46c5-861c-b0e72dee5308"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.309053 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba5bb5e-fcb8-46c5-861c-b0e72dee5308-kube-api-access-cpsrz" (OuterVolumeSpecName: "kube-api-access-cpsrz") pod "2ba5bb5e-fcb8-46c5-861c-b0e72dee5308" (UID: "2ba5bb5e-fcb8-46c5-861c-b0e72dee5308"). InnerVolumeSpecName "kube-api-access-cpsrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.310303 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35d8d030-307c-47c1-97ce-b476cf1d3cf2-kube-api-access-fdpxn" (OuterVolumeSpecName: "kube-api-access-fdpxn") pod "35d8d030-307c-47c1-97ce-b476cf1d3cf2" (UID: "35d8d030-307c-47c1-97ce-b476cf1d3cf2"). InnerVolumeSpecName "kube-api-access-fdpxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.408942 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba5bb5e-fcb8-46c5-861c-b0e72dee5308-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.408978 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdpxn\" (UniqueName: \"kubernetes.io/projected/35d8d030-307c-47c1-97ce-b476cf1d3cf2-kube-api-access-fdpxn\") on node \"crc\" DevicePath \"\"" Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.408993 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpsrz\" (UniqueName: \"kubernetes.io/projected/2ba5bb5e-fcb8-46c5-861c-b0e72dee5308-kube-api-access-cpsrz\") on node \"crc\" DevicePath \"\"" Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.610718 4747 generic.go:334] "Generic (PLEG): container finished" podID="6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15" 
containerID="c7d3d549eab1de97c6dacd7882eb2effec271fdb15fb17549a9375c222cbc8c7" exitCode=0 Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.610819 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" event={"ID":"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15","Type":"ContainerDied","Data":"c7d3d549eab1de97c6dacd7882eb2effec271fdb15fb17549a9375c222cbc8c7"} Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.617536 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6947456757-77mcq" event={"ID":"2ba5bb5e-fcb8-46c5-861c-b0e72dee5308","Type":"ContainerDied","Data":"2ce44351b6402f7d1a058a2e3210e3b9f4d31dc5d95344c38bc19171764e377c"} Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.617719 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6947456757-77mcq" Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.623736 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f","Type":"ContainerStarted","Data":"63b2d44170a56991420f19f91c00fd154e351abef2f8bab34b1aeb0704cbc7ba"} Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.631254 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"22da0dca-a59a-40f7-8dd2-95305eea5ee0","Type":"ContainerStarted","Data":"4eaa3a7c4a99a982618a4f9e048671e21efc050e772c65f71eb05045128057f0"} Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.633094 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55dc666865-9qwv7" event={"ID":"35d8d030-307c-47c1-97ce-b476cf1d3cf2","Type":"ContainerDied","Data":"ba08a20f5bdfc9d45706a5cf0e5287d8df4249d7491eff3de55a9b267d2ab414"} Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.635020 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"633ee263-eae2-4211-ae9e-d0efd7f7ac2f","Type":"ContainerStarted","Data":"ebbe3503734ece24c0acecc8b2521919acb2a01930afa5a2e8c4b18e7ca6782b"} Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.635464 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55dc666865-9qwv7" Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.637631 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-94b4f9f45-ng7wf" Dec 15 05:50:53 crc kubenswrapper[4747]: E1215 05:50:53.640884 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos9/openstack-ovn-controller:2e38c527ddf6e767040136ecf014e7b9\\\"\"" pod="openstack/ovn-controller-b65n4" podUID="becaa3b6-8cd5-4e55-9a81-0a21fec0a70b" Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.737382 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6947456757-77mcq"] Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.744807 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6947456757-77mcq"] Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.762150 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-94b4f9f45-ng7wf"] Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.770813 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-94b4f9f45-ng7wf"] Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.809721 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55dc666865-9qwv7"] Dec 15 05:50:53 crc kubenswrapper[4747]: I1215 05:50:53.815059 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55dc666865-9qwv7"] Dec 15 05:50:54 crc kubenswrapper[4747]: I1215 05:50:54.641209 
4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ba5bb5e-fcb8-46c5-861c-b0e72dee5308" path="/var/lib/kubelet/pods/2ba5bb5e-fcb8-46c5-861c-b0e72dee5308/volumes" Dec 15 05:50:54 crc kubenswrapper[4747]: I1215 05:50:54.642169 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35d8d030-307c-47c1-97ce-b476cf1d3cf2" path="/var/lib/kubelet/pods/35d8d030-307c-47c1-97ce-b476cf1d3cf2/volumes" Dec 15 05:50:54 crc kubenswrapper[4747]: I1215 05:50:54.642554 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5217f8b-c2c7-4600-8137-b3367ec052ad" path="/var/lib/kubelet/pods/c5217f8b-c2c7-4600-8137-b3367ec052ad/volumes" Dec 15 05:50:54 crc kubenswrapper[4747]: I1215 05:50:54.648036 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"633ee263-eae2-4211-ae9e-d0efd7f7ac2f","Type":"ContainerStarted","Data":"534119aa255bd85e9401ca9d5ea74049b8ef74c5d88d01b0631e4ac9df75cff4"} Dec 15 05:50:54 crc kubenswrapper[4747]: I1215 05:50:54.650317 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-pl88m" event={"ID":"de8a2190-14e4-44fa-a3a7-18182a6b4df6","Type":"ContainerStarted","Data":"fae51cf78ed9966d59d3573957697a1bffa587fb4364411887cd9502eeb053f2"} Dec 15 05:50:54 crc kubenswrapper[4747]: I1215 05:50:54.653148 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" event={"ID":"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15","Type":"ContainerStarted","Data":"17150056ddd68644faad27afcf7d6c9c0738e87b6065aee0320ab58462e0fb91"} Dec 15 05:50:54 crc kubenswrapper[4747]: I1215 05:50:54.653290 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" Dec 15 05:50:54 crc kubenswrapper[4747]: I1215 05:50:54.654835 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"d84a0b88-fbfb-4d28-89e0-5a64b4a1430f","Type":"ContainerStarted","Data":"f9ced6ff89573c950da06468f5aea70decf08d45162022d7ae9a18c74e1a5b92"} Dec 15 05:50:54 crc kubenswrapper[4747]: I1215 05:50:54.656198 4747 generic.go:334] "Generic (PLEG): container finished" podID="dca41dd5-5747-42a1-8703-30ae549342b7" containerID="bedaa94bcc2b2415f20de348cf5ecb236e62e4cde6a1f3a26fb2ae45e095b7e8" exitCode=0 Dec 15 05:50:54 crc kubenswrapper[4747]: I1215 05:50:54.656288 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jmz8h" event={"ID":"dca41dd5-5747-42a1-8703-30ae549342b7","Type":"ContainerDied","Data":"bedaa94bcc2b2415f20de348cf5ecb236e62e4cde6a1f3a26fb2ae45e095b7e8"} Dec 15 05:50:54 crc kubenswrapper[4747]: I1215 05:50:54.665694 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-pl88m" podStartSLOduration=2.697392026 podStartE2EDuration="23.665682461s" podCreationTimestamp="2025-12-15 05:50:31 +0000 UTC" firstStartedPulling="2025-12-15 05:50:32.443366558 +0000 UTC m=+796.139878475" lastFinishedPulling="2025-12-15 05:50:53.411656993 +0000 UTC m=+817.108168910" observedRunningTime="2025-12-15 05:50:54.662196084 +0000 UTC m=+818.358708001" watchObservedRunningTime="2025-12-15 05:50:54.665682461 +0000 UTC m=+818.362194378" Dec 15 05:50:54 crc kubenswrapper[4747]: I1215 05:50:54.682743 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.755494502 podStartE2EDuration="25.682724779s" podCreationTimestamp="2025-12-15 05:50:29 +0000 UTC" firstStartedPulling="2025-12-15 05:50:31.437236945 +0000 UTC m=+795.133748863" lastFinishedPulling="2025-12-15 05:50:53.364467223 +0000 UTC m=+817.060979140" observedRunningTime="2025-12-15 05:50:54.675516934 +0000 UTC m=+818.372028851" watchObservedRunningTime="2025-12-15 05:50:54.682724779 +0000 UTC m=+818.379236695" Dec 15 05:50:54 crc kubenswrapper[4747]: I1215 
05:50:54.725132 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" podStartSLOduration=6.091012404 podStartE2EDuration="23.725110955s" podCreationTimestamp="2025-12-15 05:50:31 +0000 UTC" firstStartedPulling="2025-12-15 05:50:35.727143456 +0000 UTC m=+799.423655374" lastFinishedPulling="2025-12-15 05:50:53.361242008 +0000 UTC m=+817.057753925" observedRunningTime="2025-12-15 05:50:54.719526674 +0000 UTC m=+818.416038591" watchObservedRunningTime="2025-12-15 05:50:54.725110955 +0000 UTC m=+818.421622872" Dec 15 05:50:54 crc kubenswrapper[4747]: I1215 05:50:54.809715 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.056392 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d9886d5bf-r27p4"] Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.091817 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78d59ccb8c-t4s8l"] Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.093344 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l" Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.098231 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.099789 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78d59ccb8c-t4s8l"] Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.141281 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-ovsdbserver-nb\") pod \"dnsmasq-dns-78d59ccb8c-t4s8l\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l" Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.141379 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-config\") pod \"dnsmasq-dns-78d59ccb8c-t4s8l\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l" Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.141551 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-ovsdbserver-sb\") pod \"dnsmasq-dns-78d59ccb8c-t4s8l\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l" Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.141590 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-dns-svc\") pod \"dnsmasq-dns-78d59ccb8c-t4s8l\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l" 
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.141617 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj8cr\" (UniqueName: \"kubernetes.io/projected/5f2d45f0-0b7e-4950-b133-c3a27441c33a-kube-api-access-mj8cr\") pod \"dnsmasq-dns-78d59ccb8c-t4s8l\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l"
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.243890 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-ovsdbserver-sb\") pod \"dnsmasq-dns-78d59ccb8c-t4s8l\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l"
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.243968 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-dns-svc\") pod \"dnsmasq-dns-78d59ccb8c-t4s8l\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l"
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.243999 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj8cr\" (UniqueName: \"kubernetes.io/projected/5f2d45f0-0b7e-4950-b133-c3a27441c33a-kube-api-access-mj8cr\") pod \"dnsmasq-dns-78d59ccb8c-t4s8l\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l"
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.244069 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-ovsdbserver-nb\") pod \"dnsmasq-dns-78d59ccb8c-t4s8l\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l"
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.244156 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-config\") pod \"dnsmasq-dns-78d59ccb8c-t4s8l\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l"
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.244806 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-ovsdbserver-sb\") pod \"dnsmasq-dns-78d59ccb8c-t4s8l\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l"
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.245076 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-config\") pod \"dnsmasq-dns-78d59ccb8c-t4s8l\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l"
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.245080 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-dns-svc\") pod \"dnsmasq-dns-78d59ccb8c-t4s8l\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l"
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.245498 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-ovsdbserver-nb\") pod \"dnsmasq-dns-78d59ccb8c-t4s8l\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l"
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.259754 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj8cr\" (UniqueName: \"kubernetes.io/projected/5f2d45f0-0b7e-4950-b133-c3a27441c33a-kube-api-access-mj8cr\") pod \"dnsmasq-dns-78d59ccb8c-t4s8l\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l"
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.328628 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d9886d5bf-r27p4"
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.417848 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l"
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.446278 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45ad3e3b-7313-4633-9da2-b644eefbad5a-dns-svc\") pod \"45ad3e3b-7313-4633-9da2-b644eefbad5a\" (UID: \"45ad3e3b-7313-4633-9da2-b644eefbad5a\") "
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.446490 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45ad3e3b-7313-4633-9da2-b644eefbad5a-config\") pod \"45ad3e3b-7313-4633-9da2-b644eefbad5a\" (UID: \"45ad3e3b-7313-4633-9da2-b644eefbad5a\") "
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.446535 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7chqg\" (UniqueName: \"kubernetes.io/projected/45ad3e3b-7313-4633-9da2-b644eefbad5a-kube-api-access-7chqg\") pod \"45ad3e3b-7313-4633-9da2-b644eefbad5a\" (UID: \"45ad3e3b-7313-4633-9da2-b644eefbad5a\") "
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.446809 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45ad3e3b-7313-4633-9da2-b644eefbad5a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "45ad3e3b-7313-4633-9da2-b644eefbad5a" (UID: "45ad3e3b-7313-4633-9da2-b644eefbad5a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.447038 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45ad3e3b-7313-4633-9da2-b644eefbad5a-config" (OuterVolumeSpecName: "config") pod "45ad3e3b-7313-4633-9da2-b644eefbad5a" (UID: "45ad3e3b-7313-4633-9da2-b644eefbad5a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.449676 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ad3e3b-7313-4633-9da2-b644eefbad5a-kube-api-access-7chqg" (OuterVolumeSpecName: "kube-api-access-7chqg") pod "45ad3e3b-7313-4633-9da2-b644eefbad5a" (UID: "45ad3e3b-7313-4633-9da2-b644eefbad5a"). InnerVolumeSpecName "kube-api-access-7chqg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.547795 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45ad3e3b-7313-4633-9da2-b644eefbad5a-config\") on node \"crc\" DevicePath \"\""
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.547829 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7chqg\" (UniqueName: \"kubernetes.io/projected/45ad3e3b-7313-4633-9da2-b644eefbad5a-kube-api-access-7chqg\") on node \"crc\" DevicePath \"\""
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.547841 4747 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45ad3e3b-7313-4633-9da2-b644eefbad5a-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.674021 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jmz8h" event={"ID":"dca41dd5-5747-42a1-8703-30ae549342b7","Type":"ContainerStarted","Data":"31d4734f50f69edd5c8c9c95eeffe960ef8daccf797ac7ceaeb61a630a380c53"}
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.674329 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jmz8h" event={"ID":"dca41dd5-5747-42a1-8703-30ae549342b7","Type":"ContainerStarted","Data":"f4336fedeea7948ad066cfb2fb124d0dbec36999125a25491ccc592adf255546"}
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.674595 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jmz8h"
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.675827 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"633ee263-eae2-4211-ae9e-d0efd7f7ac2f","Type":"ContainerStarted","Data":"03a91f150df2768c059504a3c81e0ce5f48e71b126d8a8b947b7e3f841399cd2"}
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.678838 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d9886d5bf-r27p4" event={"ID":"45ad3e3b-7313-4633-9da2-b644eefbad5a","Type":"ContainerDied","Data":"dbedae2b06be0edf8fc7fd1be4b929dc0d5415adfdac7188989abe3b0acc7c2d"}
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.678954 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d9886d5bf-r27p4"
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.697976 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-jmz8h" podStartSLOduration=3.655270206 podStartE2EDuration="25.697962529s" podCreationTimestamp="2025-12-15 05:50:30 +0000 UTC" firstStartedPulling="2025-12-15 05:50:31.152497767 +0000 UTC m=+794.849009674" lastFinishedPulling="2025-12-15 05:50:53.19519009 +0000 UTC m=+816.891701997" observedRunningTime="2025-12-15 05:50:55.692374051 +0000 UTC m=+819.388885967" watchObservedRunningTime="2025-12-15 05:50:55.697962529 +0000 UTC m=+819.394474446"
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.710526 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=22.260418497 podStartE2EDuration="23.710514543s" podCreationTimestamp="2025-12-15 05:50:32 +0000 UTC" firstStartedPulling="2025-12-15 05:50:53.025860397 +0000 UTC m=+816.722372315" lastFinishedPulling="2025-12-15 05:50:54.475956444 +0000 UTC m=+818.172468361" observedRunningTime="2025-12-15 05:50:55.710287026 +0000 UTC m=+819.406798943" watchObservedRunningTime="2025-12-15 05:50:55.710514543 +0000 UTC m=+819.407026460"
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.741377 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d9886d5bf-r27p4"]
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.746795 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d9886d5bf-r27p4"]
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.796898 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78d59ccb8c-t4s8l"]
Dec 15 05:50:55 crc kubenswrapper[4747]: I1215 05:50:55.809437 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Dec 15 05:50:56 crc kubenswrapper[4747]: I1215 05:50:56.640070 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45ad3e3b-7313-4633-9da2-b644eefbad5a" path="/var/lib/kubelet/pods/45ad3e3b-7313-4633-9da2-b644eefbad5a/volumes"
Dec 15 05:50:56 crc kubenswrapper[4747]: I1215 05:50:56.687166 4747 generic.go:334] "Generic (PLEG): container finished" podID="5f2d45f0-0b7e-4950-b133-c3a27441c33a" containerID="b59c40f6273e1d192504f921eae61618104788c40d889190035a482a657317fd" exitCode=0
Dec 15 05:50:56 crc kubenswrapper[4747]: I1215 05:50:56.687255 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l" event={"ID":"5f2d45f0-0b7e-4950-b133-c3a27441c33a","Type":"ContainerDied","Data":"b59c40f6273e1d192504f921eae61618104788c40d889190035a482a657317fd"}
Dec 15 05:50:56 crc kubenswrapper[4747]: I1215 05:50:56.687285 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l" event={"ID":"5f2d45f0-0b7e-4950-b133-c3a27441c33a","Type":"ContainerStarted","Data":"30b2b3d735632c6b7d9db92f559f377696d924427d047f7ef89cce82efdca384"}
Dec 15 05:50:56 crc kubenswrapper[4747]: I1215 05:50:56.689621 4747 generic.go:334] "Generic (PLEG): container finished" podID="22da0dca-a59a-40f7-8dd2-95305eea5ee0" containerID="4eaa3a7c4a99a982618a4f9e048671e21efc050e772c65f71eb05045128057f0" exitCode=0
Dec 15 05:50:56 crc kubenswrapper[4747]: I1215 05:50:56.689743 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"22da0dca-a59a-40f7-8dd2-95305eea5ee0","Type":"ContainerDied","Data":"4eaa3a7c4a99a982618a4f9e048671e21efc050e772c65f71eb05045128057f0"}
Dec 15 05:50:56 crc kubenswrapper[4747]: I1215 05:50:56.690298 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jmz8h"
Dec 15 05:50:57 crc kubenswrapper[4747]: I1215 05:50:57.699772 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"22da0dca-a59a-40f7-8dd2-95305eea5ee0","Type":"ContainerStarted","Data":"1866172fca438f2df5a4798c4205647884103640d0f8c9473d42d17a348e9fca"}
Dec 15 05:50:57 crc kubenswrapper[4747]: I1215 05:50:57.701965 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l" event={"ID":"5f2d45f0-0b7e-4950-b133-c3a27441c33a","Type":"ContainerStarted","Data":"566a100f4badeff2fff0f1ef1d1a34c66253287bdcfa3a957050a731bcfc5850"}
Dec 15 05:50:57 crc kubenswrapper[4747]: I1215 05:50:57.702452 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l"
Dec 15 05:50:57 crc kubenswrapper[4747]: I1215 05:50:57.719559 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.381133671 podStartE2EDuration="35.719541891s" podCreationTimestamp="2025-12-15 05:50:22 +0000 UTC" firstStartedPulling="2025-12-15 05:50:24.855863993 +0000 UTC m=+788.552375911" lastFinishedPulling="2025-12-15 05:50:53.194272214 +0000 UTC m=+816.890784131" observedRunningTime="2025-12-15 05:50:57.717959426 +0000 UTC m=+821.414471353" watchObservedRunningTime="2025-12-15 05:50:57.719541891 +0000 UTC m=+821.416053809"
Dec 15 05:50:57 crc kubenswrapper[4747]: I1215 05:50:57.736694 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l" podStartSLOduration=2.736676523 podStartE2EDuration="2.736676523s" podCreationTimestamp="2025-12-15 05:50:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:50:57.734105499 +0000 UTC m=+821.430617415" watchObservedRunningTime="2025-12-15 05:50:57.736676523 +0000 UTC m=+821.433188440"
Dec 15 05:50:57 crc kubenswrapper[4747]: I1215 05:50:57.842393 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Dec 15 05:50:58 crc kubenswrapper[4747]: I1215 05:50:58.101870 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Dec 15 05:50:58 crc kubenswrapper[4747]: I1215 05:50:58.129264 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Dec 15 05:50:58 crc kubenswrapper[4747]: I1215 05:50:58.717859 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Dec 15 05:50:58 crc kubenswrapper[4747]: I1215 05:50:58.748984 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Dec 15 05:50:58 crc kubenswrapper[4747]: I1215 05:50:58.865051 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 15 05:50:58 crc kubenswrapper[4747]: I1215 05:50:58.865339 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.136185 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.242726 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.244016 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.245284 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.246174 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.246300 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.246711 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-dtr69"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.263062 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.413245 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9efeed7-bf14-463d-829f-b3e95d8323b2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c9efeed7-bf14-463d-829f-b3e95d8323b2\") " pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.413296 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9efeed7-bf14-463d-829f-b3e95d8323b2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c9efeed7-bf14-463d-829f-b3e95d8323b2\") " pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.413335 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9efeed7-bf14-463d-829f-b3e95d8323b2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c9efeed7-bf14-463d-829f-b3e95d8323b2\") " pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.413358 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9efeed7-bf14-463d-829f-b3e95d8323b2-config\") pod \"ovn-northd-0\" (UID: \"c9efeed7-bf14-463d-829f-b3e95d8323b2\") " pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.413752 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9efeed7-bf14-463d-829f-b3e95d8323b2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c9efeed7-bf14-463d-829f-b3e95d8323b2\") " pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.413822 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9efeed7-bf14-463d-829f-b3e95d8323b2-scripts\") pod \"ovn-northd-0\" (UID: \"c9efeed7-bf14-463d-829f-b3e95d8323b2\") " pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.414003 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzgqm\" (UniqueName: \"kubernetes.io/projected/c9efeed7-bf14-463d-829f-b3e95d8323b2-kube-api-access-jzgqm\") pod \"ovn-northd-0\" (UID: \"c9efeed7-bf14-463d-829f-b3e95d8323b2\") " pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.516426 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9efeed7-bf14-463d-829f-b3e95d8323b2-scripts\") pod \"ovn-northd-0\" (UID: \"c9efeed7-bf14-463d-829f-b3e95d8323b2\") " pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.516482 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzgqm\" (UniqueName: \"kubernetes.io/projected/c9efeed7-bf14-463d-829f-b3e95d8323b2-kube-api-access-jzgqm\") pod \"ovn-northd-0\" (UID: \"c9efeed7-bf14-463d-829f-b3e95d8323b2\") " pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.516528 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9efeed7-bf14-463d-829f-b3e95d8323b2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c9efeed7-bf14-463d-829f-b3e95d8323b2\") " pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.516573 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9efeed7-bf14-463d-829f-b3e95d8323b2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c9efeed7-bf14-463d-829f-b3e95d8323b2\") " pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.516626 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9efeed7-bf14-463d-829f-b3e95d8323b2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c9efeed7-bf14-463d-829f-b3e95d8323b2\") " pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.516662 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9efeed7-bf14-463d-829f-b3e95d8323b2-config\") pod \"ovn-northd-0\" (UID: \"c9efeed7-bf14-463d-829f-b3e95d8323b2\") " pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.516808 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9efeed7-bf14-463d-829f-b3e95d8323b2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c9efeed7-bf14-463d-829f-b3e95d8323b2\") " pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.517621 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9efeed7-bf14-463d-829f-b3e95d8323b2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c9efeed7-bf14-463d-829f-b3e95d8323b2\") " pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.518069 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9efeed7-bf14-463d-829f-b3e95d8323b2-scripts\") pod \"ovn-northd-0\" (UID: \"c9efeed7-bf14-463d-829f-b3e95d8323b2\") " pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.518520 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9efeed7-bf14-463d-829f-b3e95d8323b2-config\") pod \"ovn-northd-0\" (UID: \"c9efeed7-bf14-463d-829f-b3e95d8323b2\") " pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.524042 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9efeed7-bf14-463d-829f-b3e95d8323b2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c9efeed7-bf14-463d-829f-b3e95d8323b2\") " pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.525055 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9efeed7-bf14-463d-829f-b3e95d8323b2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c9efeed7-bf14-463d-829f-b3e95d8323b2\") " pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.525996 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9efeed7-bf14-463d-829f-b3e95d8323b2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c9efeed7-bf14-463d-829f-b3e95d8323b2\") " pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.536053 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzgqm\" (UniqueName: \"kubernetes.io/projected/c9efeed7-bf14-463d-829f-b3e95d8323b2-kube-api-access-jzgqm\") pod \"ovn-northd-0\" (UID: \"c9efeed7-bf14-463d-829f-b3e95d8323b2\") " pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.572584 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Dec 15 05:50:59 crc kubenswrapper[4747]: I1215 05:50:59.986253 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Dec 15 05:50:59 crc kubenswrapper[4747]: W1215 05:50:59.989132 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9efeed7_bf14_463d_829f_b3e95d8323b2.slice/crio-1c1d322b5c0346e72ae2898f7c733ea615af2a62792531b3d9560a0bba2f894d WatchSource:0}: Error finding container 1c1d322b5c0346e72ae2898f7c733ea615af2a62792531b3d9560a0bba2f894d: Status 404 returned error can't find the container with id 1c1d322b5c0346e72ae2898f7c733ea615af2a62792531b3d9560a0bba2f894d
Dec 15 05:51:00 crc kubenswrapper[4747]: I1215 05:51:00.735441 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c9efeed7-bf14-463d-829f-b3e95d8323b2","Type":"ContainerStarted","Data":"1c1d322b5c0346e72ae2898f7c733ea615af2a62792531b3d9560a0bba2f894d"}
Dec 15 05:51:01 crc kubenswrapper[4747]: I1215 05:51:01.746383 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c9efeed7-bf14-463d-829f-b3e95d8323b2","Type":"ContainerStarted","Data":"f6b492a782bef54a0e3131d9301513f2942b7870afef4264c17dbfa56923ecc4"}
Dec 15 05:51:01 crc kubenswrapper[4747]: I1215 05:51:01.746676 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c9efeed7-bf14-463d-829f-b3e95d8323b2","Type":"ContainerStarted","Data":"0735ff81cda99ab96f0d253fbc73f14e0865625aa387d90746398c367ecc2a2e"}
Dec 15 05:51:01 crc kubenswrapper[4747]: I1215 05:51:01.746882 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Dec 15 05:51:01 crc kubenswrapper[4747]: I1215 05:51:01.766858 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.424390753 podStartE2EDuration="2.766834761s" podCreationTimestamp="2025-12-15 05:50:59 +0000 UTC" firstStartedPulling="2025-12-15 05:50:59.99156988 +0000 UTC m=+823.688081797" lastFinishedPulling="2025-12-15 05:51:01.334013888 +0000 UTC m=+825.030525805" observedRunningTime="2025-12-15 05:51:01.764370076 +0000 UTC m=+825.460881993" watchObservedRunningTime="2025-12-15 05:51:01.766834761 +0000 UTC m=+825.463346678"
Dec 15 05:51:02 crc kubenswrapper[4747]: I1215 05:51:02.190032 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp"
Dec 15 05:51:03 crc kubenswrapper[4747]: I1215 05:51:03.762350 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9bece5e6-b345-4969-a563-81fb3706f8f1","Type":"ContainerStarted","Data":"7e5ab965190fd6a4bb7f0f426cc015cca7ab231dca777f9c018ffe750cf11f57"}
Dec 15 05:51:03 crc kubenswrapper[4747]: I1215 05:51:03.764736 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"65a53faf-94ad-48f3-b8e0-8642376f89ee","Type":"ContainerStarted","Data":"8e013d9a657de63787a61a2c6aea79b4254a3e35c2c761dd928d98d5ed13bf52"}
Dec 15 05:51:04 crc kubenswrapper[4747]: I1215 05:51:04.264116 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Dec 15 05:51:04 crc kubenswrapper[4747]: I1215 05:51:04.264175 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Dec 15 05:51:04 crc kubenswrapper[4747]: I1215 05:51:04.322080 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Dec 15 05:51:04 crc kubenswrapper[4747]: I1215 05:51:04.785351 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8828a0c4-9d91-45ba-a6f7-3bd720a9596b","Type":"ContainerStarted","Data":"ebe84709f5783f36baf2af713787d3c1db13ac87e689e9e86555d3f6a4b6e2cd"}
Dec 15 05:51:04 crc kubenswrapper[4747]: I1215 05:51:04.785831 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Dec 15 05:51:04 crc kubenswrapper[4747]: I1215 05:51:04.808769 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.03133436 podStartE2EDuration="40.808750833s" podCreationTimestamp="2025-12-15 05:50:24 +0000 UTC" firstStartedPulling="2025-12-15 05:50:25.515018715 +0000 UTC m=+789.211530632" lastFinishedPulling="2025-12-15 05:51:04.292435188 +0000 UTC m=+827.988947105" observedRunningTime="2025-12-15 05:51:04.802014735 +0000 UTC m=+828.498526652" watchObservedRunningTime="2025-12-15 05:51:04.808750833 +0000 UTC m=+828.505262749"
Dec 15 05:51:04 crc kubenswrapper[4747]: I1215 05:51:04.843552 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Dec 15 05:51:05 crc kubenswrapper[4747]: I1215 05:51:05.420158 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l"
Dec 15 05:51:05 crc kubenswrapper[4747]: I1215 05:51:05.474854 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-767d7fb4d9-hlwhp"]
Dec 15 05:51:05 crc kubenswrapper[4747]: I1215 05:51:05.475123 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" podUID="6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15" containerName="dnsmasq-dns" containerID="cri-o://17150056ddd68644faad27afcf7d6c9c0738e87b6065aee0320ab58462e0fb91" gracePeriod=10
Dec 15 05:51:05 crc kubenswrapper[4747]: I1215 05:51:05.796362 4747 generic.go:334] "Generic (PLEG): container finished" podID="6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15" containerID="17150056ddd68644faad27afcf7d6c9c0738e87b6065aee0320ab58462e0fb91" exitCode=0
Dec 15 05:51:05 crc kubenswrapper[4747]: I1215 05:51:05.796871 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" event={"ID":"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15","Type":"ContainerDied","Data":"17150056ddd68644faad27afcf7d6c9c0738e87b6065aee0320ab58462e0fb91"}
Dec 15 05:51:05 crc kubenswrapper[4747]: I1215 05:51:05.877258 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp"
Dec 15 05:51:05 crc kubenswrapper[4747]: I1215 05:51:05.922640 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-config\") pod \"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15\" (UID: \"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15\") "
Dec 15 05:51:05 crc kubenswrapper[4747]: I1215 05:51:05.922736 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-dns-svc\") pod \"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15\" (UID: \"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15\") "
Dec 15 05:51:05 crc kubenswrapper[4747]: I1215 05:51:05.922847 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-ovsdbserver-nb\") pod \"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15\" (UID: \"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15\") "
Dec 15 05:51:05 crc kubenswrapper[4747]: I1215 05:51:05.922986 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hksl\" (UniqueName: \"kubernetes.io/projected/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-kube-api-access-5hksl\") pod \"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15\" (UID: \"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15\") "
Dec 15 05:51:05 crc kubenswrapper[4747]: I1215 05:51:05.927071 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-kube-api-access-5hksl" (OuterVolumeSpecName: "kube-api-access-5hksl") pod "6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15" (UID: "6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15"). InnerVolumeSpecName "kube-api-access-5hksl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 15 05:51:05 crc kubenswrapper[4747]: I1215 05:51:05.948391 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15" (UID: "6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 15 05:51:05 crc kubenswrapper[4747]: I1215 05:51:05.951742 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-config" (OuterVolumeSpecName: "config") pod "6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15" (UID: "6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 15 05:51:05 crc kubenswrapper[4747]: I1215 05:51:05.953518 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15" (UID: "6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 15 05:51:06 crc kubenswrapper[4747]: I1215 05:51:06.024139 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 15 05:51:06 crc kubenswrapper[4747]: I1215 05:51:06.024168 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hksl\" (UniqueName: \"kubernetes.io/projected/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-kube-api-access-5hksl\") on node \"crc\" DevicePath \"\""
Dec 15 05:51:06 crc kubenswrapper[4747]: I1215 05:51:06.024183 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-config\") on node \"crc\" DevicePath \"\""
Dec 15 05:51:06 crc kubenswrapper[4747]: I1215 05:51:06.024193 4747 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 15 05:51:06 crc kubenswrapper[4747]: I1215 05:51:06.807693 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f242c5ef-84fc-4437-86a0-0175e8ea123b","Type":"ContainerStarted","Data":"a5d73487a26cbb0cdaddf0c3b5380882777d9236b351d2a56d8f98623b298beb"}
Dec 15 05:51:06 crc kubenswrapper[4747]: I1215 05:51:06.810940 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp" event={"ID":"6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15","Type":"ContainerDied","Data":"dde568c464b88fca4f2ff2be787c1921d1925e2d4e6e23c94748071de6389419"}
Dec 15 05:51:06 crc kubenswrapper[4747]: I1215 05:51:06.811011 4747 scope.go:117] "RemoveContainer" containerID="17150056ddd68644faad27afcf7d6c9c0738e87b6065aee0320ab58462e0fb91"
Dec 15 05:51:06 crc kubenswrapper[4747]: I1215 05:51:06.811154 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-767d7fb4d9-hlwhp"
Dec 15 05:51:06 crc kubenswrapper[4747]: I1215 05:51:06.828760 4747 scope.go:117] "RemoveContainer" containerID="c7d3d549eab1de97c6dacd7882eb2effec271fdb15fb17549a9375c222cbc8c7"
Dec 15 05:51:06 crc kubenswrapper[4747]: I1215 05:51:06.859872 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-767d7fb4d9-hlwhp"]
Dec 15 05:51:06 crc kubenswrapper[4747]: I1215 05:51:06.865090 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-767d7fb4d9-hlwhp"]
Dec 15 05:51:08 crc kubenswrapper[4747]: I1215 05:51:08.639666 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15" path="/var/lib/kubelet/pods/6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15/volumes"
Dec 15 05:51:08 crc kubenswrapper[4747]: I1215 05:51:08.829376 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b65n4" event={"ID":"becaa3b6-8cd5-4e55-9a81-0a21fec0a70b","Type":"ContainerStarted","Data":"e937ecb13d0a528ea55405c815b197c728aa5beef9169d51b5c846bea4826b38"}
Dec 15 05:51:08 crc kubenswrapper[4747]: I1215 05:51:08.829642 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-b65n4"
Dec 15 05:51:08 crc kubenswrapper[4747]: I1215 05:51:08.832442 4747 generic.go:334] "Generic (PLEG): container finished" podID="f242c5ef-84fc-4437-86a0-0175e8ea123b" containerID="a5d73487a26cbb0cdaddf0c3b5380882777d9236b351d2a56d8f98623b298beb" exitCode=0
Dec 15 05:51:08 crc kubenswrapper[4747]: I1215 05:51:08.832503 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f242c5ef-84fc-4437-86a0-0175e8ea123b","Type":"ContainerDied","Data":"a5d73487a26cbb0cdaddf0c3b5380882777d9236b351d2a56d8f98623b298beb"}
Dec 15 05:51:08 crc kubenswrapper[4747]: I1215 05:51:08.854332 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-b65n4" podStartSLOduration=1.5511031 podStartE2EDuration="38.854316704s" podCreationTimestamp="2025-12-15 05:50:30 +0000 UTC" firstStartedPulling="2025-12-15 05:50:30.984666602 +0000 UTC m=+794.681178520" lastFinishedPulling="2025-12-15 05:51:08.287880208 +0000 UTC m=+831.984392124" observedRunningTime="2025-12-15 05:51:08.848402051 +0000 UTC m=+832.544913969" watchObservedRunningTime="2025-12-15 05:51:08.854316704 +0000 UTC m=+832.550828611"
Dec 15 05:51:09 crc kubenswrapper[4747]: I1215 05:51:09.842772 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f242c5ef-84fc-4437-86a0-0175e8ea123b","Type":"ContainerStarted","Data":"9b942148014a708763791fd8afe7f136ef95fed0d663ca36397a86324ef5a816"}
Dec 15 05:51:09 crc kubenswrapper[4747]: I1215 05:51:09.861999 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371987.992811 podStartE2EDuration="48.86196368s" podCreationTimestamp="2025-12-15 05:50:21 +0000 UTC" firstStartedPulling="2025-12-15 05:50:23.447915114 +0000 UTC m=+787.144427032" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:51:09.86079355 +0000 UTC m=+833.557305457" watchObservedRunningTime="2025-12-15 05:51:09.86196368 +0000 UTC m=+833.558475597"
Dec 15 05:51:10 crc kubenswrapper[4747]: I1215 05:51:10.092788 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Dec 15 05:51:13 crc kubenswrapper[4747]: I1215 05:51:13.017472 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Dec 15 05:51:13 crc kubenswrapper[4747]: I1215 05:51:13.018427 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Dec 15 05:51:14 crc kubenswrapper[4747]: I1215 05:51:14.619061 4747 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 15 05:51:15 crc kubenswrapper[4747]: I1215 05:51:15.167610 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 15 05:51:15 crc kubenswrapper[4747]: I1215 05:51:15.235070 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.433880 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67fb8b8965-6vtwl"] Dec 15 05:51:16 crc kubenswrapper[4747]: E1215 05:51:16.434254 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15" containerName="init" Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.434269 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15" containerName="init" Dec 15 05:51:16 crc kubenswrapper[4747]: E1215 05:51:16.434291 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15" containerName="dnsmasq-dns" Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.434299 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15" containerName="dnsmasq-dns" Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.434458 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dce00fa-ec1d-4bba-a8b2-a0b41ce4ad15" containerName="dnsmasq-dns" Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.435312 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.447267 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fb8b8965-6vtwl"] Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.606877 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-ovsdbserver-sb\") pod \"dnsmasq-dns-67fb8b8965-6vtwl\" (UID: \"bade9597-335c-43a4-9477-ab4f08999fa8\") " pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.607295 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-dns-svc\") pod \"dnsmasq-dns-67fb8b8965-6vtwl\" (UID: \"bade9597-335c-43a4-9477-ab4f08999fa8\") " pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.607418 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tt89\" (UniqueName: \"kubernetes.io/projected/bade9597-335c-43a4-9477-ab4f08999fa8-kube-api-access-4tt89\") pod \"dnsmasq-dns-67fb8b8965-6vtwl\" (UID: \"bade9597-335c-43a4-9477-ab4f08999fa8\") " pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.607444 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-ovsdbserver-nb\") pod \"dnsmasq-dns-67fb8b8965-6vtwl\" (UID: \"bade9597-335c-43a4-9477-ab4f08999fa8\") " pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.607467 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-config\") pod \"dnsmasq-dns-67fb8b8965-6vtwl\" (UID: \"bade9597-335c-43a4-9477-ab4f08999fa8\") " pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.709325 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-ovsdbserver-sb\") pod \"dnsmasq-dns-67fb8b8965-6vtwl\" (UID: \"bade9597-335c-43a4-9477-ab4f08999fa8\") " pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.709393 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-dns-svc\") pod \"dnsmasq-dns-67fb8b8965-6vtwl\" (UID: \"bade9597-335c-43a4-9477-ab4f08999fa8\") " pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.709464 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tt89\" (UniqueName: \"kubernetes.io/projected/bade9597-335c-43a4-9477-ab4f08999fa8-kube-api-access-4tt89\") pod \"dnsmasq-dns-67fb8b8965-6vtwl\" (UID: \"bade9597-335c-43a4-9477-ab4f08999fa8\") " pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.709500 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-ovsdbserver-nb\") pod \"dnsmasq-dns-67fb8b8965-6vtwl\" (UID: \"bade9597-335c-43a4-9477-ab4f08999fa8\") " pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.709533 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-config\") pod \"dnsmasq-dns-67fb8b8965-6vtwl\" (UID: \"bade9597-335c-43a4-9477-ab4f08999fa8\") " pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.710631 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-ovsdbserver-nb\") pod \"dnsmasq-dns-67fb8b8965-6vtwl\" (UID: \"bade9597-335c-43a4-9477-ab4f08999fa8\") " pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.710803 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-config\") pod \"dnsmasq-dns-67fb8b8965-6vtwl\" (UID: \"bade9597-335c-43a4-9477-ab4f08999fa8\") " pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.710951 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-dns-svc\") pod \"dnsmasq-dns-67fb8b8965-6vtwl\" (UID: \"bade9597-335c-43a4-9477-ab4f08999fa8\") " pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.711308 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-ovsdbserver-sb\") pod \"dnsmasq-dns-67fb8b8965-6vtwl\" (UID: \"bade9597-335c-43a4-9477-ab4f08999fa8\") " pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.728616 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tt89\" (UniqueName: \"kubernetes.io/projected/bade9597-335c-43a4-9477-ab4f08999fa8-kube-api-access-4tt89\") pod 
\"dnsmasq-dns-67fb8b8965-6vtwl\" (UID: \"bade9597-335c-43a4-9477-ab4f08999fa8\") " pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" Dec 15 05:51:16 crc kubenswrapper[4747]: I1215 05:51:16.756558 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.106072 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fb8b8965-6vtwl"] Dec 15 05:51:17 crc kubenswrapper[4747]: W1215 05:51:17.110993 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbade9597_335c_43a4_9477_ab4f08999fa8.slice/crio-03644ad93afaa3dd9ce7257f50b3b316cef34cb35104dbf4127b5f663cb61414 WatchSource:0}: Error finding container 03644ad93afaa3dd9ce7257f50b3b316cef34cb35104dbf4127b5f663cb61414: Status 404 returned error can't find the container with id 03644ad93afaa3dd9ce7257f50b3b316cef34cb35104dbf4127b5f663cb61414 Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.631948 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.637152 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.638831 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.639394 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.639435 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.643491 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-mhwc7" Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.658521 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.735812 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/08c6df63-e1b2-4194-9bbe-b07410de16e7-etc-swift\") pod \"swift-storage-0\" (UID: \"08c6df63-e1b2-4194-9bbe-b07410de16e7\") " pod="openstack/swift-storage-0" Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.735887 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk4lp\" (UniqueName: \"kubernetes.io/projected/08c6df63-e1b2-4194-9bbe-b07410de16e7-kube-api-access-kk4lp\") pod \"swift-storage-0\" (UID: \"08c6df63-e1b2-4194-9bbe-b07410de16e7\") " pod="openstack/swift-storage-0" Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.735936 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/08c6df63-e1b2-4194-9bbe-b07410de16e7-lock\") pod \"swift-storage-0\" (UID: \"08c6df63-e1b2-4194-9bbe-b07410de16e7\") " 
pod="openstack/swift-storage-0" Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.735990 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/08c6df63-e1b2-4194-9bbe-b07410de16e7-cache\") pod \"swift-storage-0\" (UID: \"08c6df63-e1b2-4194-9bbe-b07410de16e7\") " pod="openstack/swift-storage-0" Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.736133 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"08c6df63-e1b2-4194-9bbe-b07410de16e7\") " pod="openstack/swift-storage-0" Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.837790 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk4lp\" (UniqueName: \"kubernetes.io/projected/08c6df63-e1b2-4194-9bbe-b07410de16e7-kube-api-access-kk4lp\") pod \"swift-storage-0\" (UID: \"08c6df63-e1b2-4194-9bbe-b07410de16e7\") " pod="openstack/swift-storage-0" Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.837841 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/08c6df63-e1b2-4194-9bbe-b07410de16e7-lock\") pod \"swift-storage-0\" (UID: \"08c6df63-e1b2-4194-9bbe-b07410de16e7\") " pod="openstack/swift-storage-0" Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.837895 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/08c6df63-e1b2-4194-9bbe-b07410de16e7-cache\") pod \"swift-storage-0\" (UID: \"08c6df63-e1b2-4194-9bbe-b07410de16e7\") " pod="openstack/swift-storage-0" Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.837920 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"08c6df63-e1b2-4194-9bbe-b07410de16e7\") " pod="openstack/swift-storage-0" Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.838003 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/08c6df63-e1b2-4194-9bbe-b07410de16e7-etc-swift\") pod \"swift-storage-0\" (UID: \"08c6df63-e1b2-4194-9bbe-b07410de16e7\") " pod="openstack/swift-storage-0" Dec 15 05:51:17 crc kubenswrapper[4747]: E1215 05:51:17.838169 4747 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 15 05:51:17 crc kubenswrapper[4747]: E1215 05:51:17.838189 4747 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 15 05:51:17 crc kubenswrapper[4747]: E1215 05:51:17.838246 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08c6df63-e1b2-4194-9bbe-b07410de16e7-etc-swift podName:08c6df63-e1b2-4194-9bbe-b07410de16e7 nodeName:}" failed. No retries permitted until 2025-12-15 05:51:18.338227285 +0000 UTC m=+842.034739202 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/08c6df63-e1b2-4194-9bbe-b07410de16e7-etc-swift") pod "swift-storage-0" (UID: "08c6df63-e1b2-4194-9bbe-b07410de16e7") : configmap "swift-ring-files" not found Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.838367 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/08c6df63-e1b2-4194-9bbe-b07410de16e7-lock\") pod \"swift-storage-0\" (UID: \"08c6df63-e1b2-4194-9bbe-b07410de16e7\") " pod="openstack/swift-storage-0" Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.838464 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/08c6df63-e1b2-4194-9bbe-b07410de16e7-cache\") pod \"swift-storage-0\" (UID: \"08c6df63-e1b2-4194-9bbe-b07410de16e7\") " pod="openstack/swift-storage-0" Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.838594 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"08c6df63-e1b2-4194-9bbe-b07410de16e7\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0" Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.854353 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk4lp\" (UniqueName: \"kubernetes.io/projected/08c6df63-e1b2-4194-9bbe-b07410de16e7-kube-api-access-kk4lp\") pod \"swift-storage-0\" (UID: \"08c6df63-e1b2-4194-9bbe-b07410de16e7\") " pod="openstack/swift-storage-0" Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.855298 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"08c6df63-e1b2-4194-9bbe-b07410de16e7\") " 
pod="openstack/swift-storage-0" Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.941328 4747 generic.go:334] "Generic (PLEG): container finished" podID="bade9597-335c-43a4-9477-ab4f08999fa8" containerID="568389ca60b28559ad7842ff649954380a7d61c626dd8eb971c620e30c076956" exitCode=0 Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.941554 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" event={"ID":"bade9597-335c-43a4-9477-ab4f08999fa8","Type":"ContainerDied","Data":"568389ca60b28559ad7842ff649954380a7d61c626dd8eb971c620e30c076956"} Dec 15 05:51:17 crc kubenswrapper[4747]: I1215 05:51:17.942322 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" event={"ID":"bade9597-335c-43a4-9477-ab4f08999fa8","Type":"ContainerStarted","Data":"03644ad93afaa3dd9ce7257f50b3b316cef34cb35104dbf4127b5f663cb61414"} Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.222919 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-f2kl2"] Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.224350 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.226005 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.226263 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.227076 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.235616 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-f2kl2"] Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.347615 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6a5299e8-666f-431f-9ecc-5dcc74352e38-etc-swift\") pod \"swift-ring-rebalance-f2kl2\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.347719 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a5299e8-666f-431f-9ecc-5dcc74352e38-combined-ca-bundle\") pod \"swift-ring-rebalance-f2kl2\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.347840 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6a5299e8-666f-431f-9ecc-5dcc74352e38-ring-data-devices\") pod \"swift-ring-rebalance-f2kl2\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 
05:51:18.347911 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6a5299e8-666f-431f-9ecc-5dcc74352e38-dispersionconf\") pod \"swift-ring-rebalance-f2kl2\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.347971 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6a5299e8-666f-431f-9ecc-5dcc74352e38-swiftconf\") pod \"swift-ring-rebalance-f2kl2\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.348020 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a5299e8-666f-431f-9ecc-5dcc74352e38-scripts\") pod \"swift-ring-rebalance-f2kl2\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.348164 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d526\" (UniqueName: \"kubernetes.io/projected/6a5299e8-666f-431f-9ecc-5dcc74352e38-kube-api-access-7d526\") pod \"swift-ring-rebalance-f2kl2\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.348224 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/08c6df63-e1b2-4194-9bbe-b07410de16e7-etc-swift\") pod \"swift-storage-0\" (UID: \"08c6df63-e1b2-4194-9bbe-b07410de16e7\") " pod="openstack/swift-storage-0" Dec 15 05:51:18 crc kubenswrapper[4747]: E1215 05:51:18.348392 4747 projected.go:288] 
Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 15 05:51:18 crc kubenswrapper[4747]: E1215 05:51:18.348426 4747 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 15 05:51:18 crc kubenswrapper[4747]: E1215 05:51:18.348498 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08c6df63-e1b2-4194-9bbe-b07410de16e7-etc-swift podName:08c6df63-e1b2-4194-9bbe-b07410de16e7 nodeName:}" failed. No retries permitted until 2025-12-15 05:51:19.348473096 +0000 UTC m=+843.044985012 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/08c6df63-e1b2-4194-9bbe-b07410de16e7-etc-swift") pod "swift-storage-0" (UID: "08c6df63-e1b2-4194-9bbe-b07410de16e7") : configmap "swift-ring-files" not found Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.450606 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6a5299e8-666f-431f-9ecc-5dcc74352e38-etc-swift\") pod \"swift-ring-rebalance-f2kl2\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.450705 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a5299e8-666f-431f-9ecc-5dcc74352e38-combined-ca-bundle\") pod \"swift-ring-rebalance-f2kl2\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.450811 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6a5299e8-666f-431f-9ecc-5dcc74352e38-ring-data-devices\") pod \"swift-ring-rebalance-f2kl2\" (UID: 
\"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.450877 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6a5299e8-666f-431f-9ecc-5dcc74352e38-dispersionconf\") pod \"swift-ring-rebalance-f2kl2\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.450909 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6a5299e8-666f-431f-9ecc-5dcc74352e38-swiftconf\") pod \"swift-ring-rebalance-f2kl2\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.450972 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a5299e8-666f-431f-9ecc-5dcc74352e38-scripts\") pod \"swift-ring-rebalance-f2kl2\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.451078 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d526\" (UniqueName: \"kubernetes.io/projected/6a5299e8-666f-431f-9ecc-5dcc74352e38-kube-api-access-7d526\") pod \"swift-ring-rebalance-f2kl2\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.451237 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6a5299e8-666f-431f-9ecc-5dcc74352e38-etc-swift\") pod \"swift-ring-rebalance-f2kl2\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc 
kubenswrapper[4747]: I1215 05:51:18.451766 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6a5299e8-666f-431f-9ecc-5dcc74352e38-ring-data-devices\") pod \"swift-ring-rebalance-f2kl2\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.452266 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a5299e8-666f-431f-9ecc-5dcc74352e38-scripts\") pod \"swift-ring-rebalance-f2kl2\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.455146 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6a5299e8-666f-431f-9ecc-5dcc74352e38-dispersionconf\") pod \"swift-ring-rebalance-f2kl2\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.455811 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6a5299e8-666f-431f-9ecc-5dcc74352e38-swiftconf\") pod \"swift-ring-rebalance-f2kl2\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.456154 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a5299e8-666f-431f-9ecc-5dcc74352e38-combined-ca-bundle\") pod \"swift-ring-rebalance-f2kl2\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.467729 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7d526\" (UniqueName: \"kubernetes.io/projected/6a5299e8-666f-431f-9ecc-5dcc74352e38-kube-api-access-7d526\") pod \"swift-ring-rebalance-f2kl2\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.539469 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.929566 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-f2kl2"] Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.951787 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-f2kl2" event={"ID":"6a5299e8-666f-431f-9ecc-5dcc74352e38","Type":"ContainerStarted","Data":"5c3f0cfb67f6a159e8ea0a387fe2e9206a3bf9c9e3d58d991e5668561de15056"} Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.954850 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" event={"ID":"bade9597-335c-43a4-9477-ab4f08999fa8","Type":"ContainerStarted","Data":"f52a5430d4461fd55c11ee7a0137c08c01ef2af7d98f22774a239b372cbabebd"} Dec 15 05:51:18 crc kubenswrapper[4747]: I1215 05:51:18.978913 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" podStartSLOduration=2.978892601 podStartE2EDuration="2.978892601s" podCreationTimestamp="2025-12-15 05:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:51:18.973740672 +0000 UTC m=+842.670252590" watchObservedRunningTime="2025-12-15 05:51:18.978892601 +0000 UTC m=+842.675404518" Dec 15 05:51:19 crc kubenswrapper[4747]: I1215 05:51:19.369428 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/08c6df63-e1b2-4194-9bbe-b07410de16e7-etc-swift\") pod \"swift-storage-0\" (UID: \"08c6df63-e1b2-4194-9bbe-b07410de16e7\") " pod="openstack/swift-storage-0" Dec 15 05:51:19 crc kubenswrapper[4747]: E1215 05:51:19.369647 4747 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 15 05:51:19 crc kubenswrapper[4747]: E1215 05:51:19.369680 4747 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 15 05:51:19 crc kubenswrapper[4747]: E1215 05:51:19.369749 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08c6df63-e1b2-4194-9bbe-b07410de16e7-etc-swift podName:08c6df63-e1b2-4194-9bbe-b07410de16e7 nodeName:}" failed. No retries permitted until 2025-12-15 05:51:21.369730266 +0000 UTC m=+845.066242183 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/08c6df63-e1b2-4194-9bbe-b07410de16e7-etc-swift") pod "swift-storage-0" (UID: "08c6df63-e1b2-4194-9bbe-b07410de16e7") : configmap "swift-ring-files" not found Dec 15 05:51:19 crc kubenswrapper[4747]: I1215 05:51:19.962878 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" Dec 15 05:51:20 crc kubenswrapper[4747]: I1215 05:51:20.314001 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-195a-account-create-update-zngwp"] Dec 15 05:51:20 crc kubenswrapper[4747]: I1215 05:51:20.315383 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-195a-account-create-update-zngwp" Dec 15 05:51:20 crc kubenswrapper[4747]: I1215 05:51:20.317166 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 15 05:51:20 crc kubenswrapper[4747]: I1215 05:51:20.322062 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-wkf55"] Dec 15 05:51:20 crc kubenswrapper[4747]: I1215 05:51:20.323431 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wkf55" Dec 15 05:51:20 crc kubenswrapper[4747]: I1215 05:51:20.343396 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-195a-account-create-update-zngwp"] Dec 15 05:51:20 crc kubenswrapper[4747]: I1215 05:51:20.349357 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-wkf55"] Dec 15 05:51:20 crc kubenswrapper[4747]: I1215 05:51:20.491253 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nwp7\" (UniqueName: \"kubernetes.io/projected/f75d9da5-1f04-45c4-87ad-b236ed88c43c-kube-api-access-9nwp7\") pod \"glance-195a-account-create-update-zngwp\" (UID: \"f75d9da5-1f04-45c4-87ad-b236ed88c43c\") " pod="openstack/glance-195a-account-create-update-zngwp" Dec 15 05:51:20 crc kubenswrapper[4747]: I1215 05:51:20.491343 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f75d9da5-1f04-45c4-87ad-b236ed88c43c-operator-scripts\") pod \"glance-195a-account-create-update-zngwp\" (UID: \"f75d9da5-1f04-45c4-87ad-b236ed88c43c\") " pod="openstack/glance-195a-account-create-update-zngwp" Dec 15 05:51:20 crc kubenswrapper[4747]: I1215 05:51:20.491386 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwndm\" (UniqueName: 
\"kubernetes.io/projected/ae24795d-f2ab-4a80-a939-fa3bddb8f742-kube-api-access-vwndm\") pod \"glance-db-create-wkf55\" (UID: \"ae24795d-f2ab-4a80-a939-fa3bddb8f742\") " pod="openstack/glance-db-create-wkf55" Dec 15 05:51:20 crc kubenswrapper[4747]: I1215 05:51:20.491451 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae24795d-f2ab-4a80-a939-fa3bddb8f742-operator-scripts\") pod \"glance-db-create-wkf55\" (UID: \"ae24795d-f2ab-4a80-a939-fa3bddb8f742\") " pod="openstack/glance-db-create-wkf55" Dec 15 05:51:20 crc kubenswrapper[4747]: I1215 05:51:20.593471 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f75d9da5-1f04-45c4-87ad-b236ed88c43c-operator-scripts\") pod \"glance-195a-account-create-update-zngwp\" (UID: \"f75d9da5-1f04-45c4-87ad-b236ed88c43c\") " pod="openstack/glance-195a-account-create-update-zngwp" Dec 15 05:51:20 crc kubenswrapper[4747]: I1215 05:51:20.593553 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwndm\" (UniqueName: \"kubernetes.io/projected/ae24795d-f2ab-4a80-a939-fa3bddb8f742-kube-api-access-vwndm\") pod \"glance-db-create-wkf55\" (UID: \"ae24795d-f2ab-4a80-a939-fa3bddb8f742\") " pod="openstack/glance-db-create-wkf55" Dec 15 05:51:20 crc kubenswrapper[4747]: I1215 05:51:20.593643 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae24795d-f2ab-4a80-a939-fa3bddb8f742-operator-scripts\") pod \"glance-db-create-wkf55\" (UID: \"ae24795d-f2ab-4a80-a939-fa3bddb8f742\") " pod="openstack/glance-db-create-wkf55" Dec 15 05:51:20 crc kubenswrapper[4747]: I1215 05:51:20.593805 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nwp7\" (UniqueName: 
\"kubernetes.io/projected/f75d9da5-1f04-45c4-87ad-b236ed88c43c-kube-api-access-9nwp7\") pod \"glance-195a-account-create-update-zngwp\" (UID: \"f75d9da5-1f04-45c4-87ad-b236ed88c43c\") " pod="openstack/glance-195a-account-create-update-zngwp" Dec 15 05:51:20 crc kubenswrapper[4747]: I1215 05:51:20.594361 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f75d9da5-1f04-45c4-87ad-b236ed88c43c-operator-scripts\") pod \"glance-195a-account-create-update-zngwp\" (UID: \"f75d9da5-1f04-45c4-87ad-b236ed88c43c\") " pod="openstack/glance-195a-account-create-update-zngwp" Dec 15 05:51:20 crc kubenswrapper[4747]: I1215 05:51:20.594843 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae24795d-f2ab-4a80-a939-fa3bddb8f742-operator-scripts\") pod \"glance-db-create-wkf55\" (UID: \"ae24795d-f2ab-4a80-a939-fa3bddb8f742\") " pod="openstack/glance-db-create-wkf55" Dec 15 05:51:20 crc kubenswrapper[4747]: I1215 05:51:20.613390 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nwp7\" (UniqueName: \"kubernetes.io/projected/f75d9da5-1f04-45c4-87ad-b236ed88c43c-kube-api-access-9nwp7\") pod \"glance-195a-account-create-update-zngwp\" (UID: \"f75d9da5-1f04-45c4-87ad-b236ed88c43c\") " pod="openstack/glance-195a-account-create-update-zngwp" Dec 15 05:51:20 crc kubenswrapper[4747]: I1215 05:51:20.614021 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwndm\" (UniqueName: \"kubernetes.io/projected/ae24795d-f2ab-4a80-a939-fa3bddb8f742-kube-api-access-vwndm\") pod \"glance-db-create-wkf55\" (UID: \"ae24795d-f2ab-4a80-a939-fa3bddb8f742\") " pod="openstack/glance-db-create-wkf55" Dec 15 05:51:20 crc kubenswrapper[4747]: I1215 05:51:20.635226 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-195a-account-create-update-zngwp" Dec 15 05:51:20 crc kubenswrapper[4747]: I1215 05:51:20.645908 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wkf55" Dec 15 05:51:21 crc kubenswrapper[4747]: I1215 05:51:21.066053 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-wkf55"] Dec 15 05:51:21 crc kubenswrapper[4747]: W1215 05:51:21.073271 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae24795d_f2ab_4a80_a939_fa3bddb8f742.slice/crio-e7c1cb2cb517d7c78a42df0dc1396636cee678d71c94172886587979a053c159 WatchSource:0}: Error finding container e7c1cb2cb517d7c78a42df0dc1396636cee678d71c94172886587979a053c159: Status 404 returned error can't find the container with id e7c1cb2cb517d7c78a42df0dc1396636cee678d71c94172886587979a053c159 Dec 15 05:51:21 crc kubenswrapper[4747]: I1215 05:51:21.126687 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-195a-account-create-update-zngwp"] Dec 15 05:51:21 crc kubenswrapper[4747]: I1215 05:51:21.413335 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/08c6df63-e1b2-4194-9bbe-b07410de16e7-etc-swift\") pod \"swift-storage-0\" (UID: \"08c6df63-e1b2-4194-9bbe-b07410de16e7\") " pod="openstack/swift-storage-0" Dec 15 05:51:21 crc kubenswrapper[4747]: E1215 05:51:21.413561 4747 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 15 05:51:21 crc kubenswrapper[4747]: E1215 05:51:21.413899 4747 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 15 05:51:21 crc kubenswrapper[4747]: E1215 05:51:21.413995 4747 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/08c6df63-e1b2-4194-9bbe-b07410de16e7-etc-swift podName:08c6df63-e1b2-4194-9bbe-b07410de16e7 nodeName:}" failed. No retries permitted until 2025-12-15 05:51:25.413965433 +0000 UTC m=+849.110477350 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/08c6df63-e1b2-4194-9bbe-b07410de16e7-etc-swift") pod "swift-storage-0" (UID: "08c6df63-e1b2-4194-9bbe-b07410de16e7") : configmap "swift-ring-files" not found Dec 15 05:51:21 crc kubenswrapper[4747]: I1215 05:51:21.984238 4747 generic.go:334] "Generic (PLEG): container finished" podID="ae24795d-f2ab-4a80-a939-fa3bddb8f742" containerID="8d9855913de655fb7f5f4f738bd02cd16538a3be3417c4708183d8ceb37c2436" exitCode=0 Dec 15 05:51:21 crc kubenswrapper[4747]: I1215 05:51:21.984328 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wkf55" event={"ID":"ae24795d-f2ab-4a80-a939-fa3bddb8f742","Type":"ContainerDied","Data":"8d9855913de655fb7f5f4f738bd02cd16538a3be3417c4708183d8ceb37c2436"} Dec 15 05:51:21 crc kubenswrapper[4747]: I1215 05:51:21.984386 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wkf55" event={"ID":"ae24795d-f2ab-4a80-a939-fa3bddb8f742","Type":"ContainerStarted","Data":"e7c1cb2cb517d7c78a42df0dc1396636cee678d71c94172886587979a053c159"} Dec 15 05:51:21 crc kubenswrapper[4747]: I1215 05:51:21.986790 4747 generic.go:334] "Generic (PLEG): container finished" podID="f75d9da5-1f04-45c4-87ad-b236ed88c43c" containerID="029acaa05fab390d66c6e409de12a9371987344077287c949f05ad55002b0352" exitCode=0 Dec 15 05:51:21 crc kubenswrapper[4747]: I1215 05:51:21.986818 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-195a-account-create-update-zngwp" event={"ID":"f75d9da5-1f04-45c4-87ad-b236ed88c43c","Type":"ContainerDied","Data":"029acaa05fab390d66c6e409de12a9371987344077287c949f05ad55002b0352"} Dec 15 05:51:21 crc 
kubenswrapper[4747]: I1215 05:51:21.986835 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-195a-account-create-update-zngwp" event={"ID":"f75d9da5-1f04-45c4-87ad-b236ed88c43c","Type":"ContainerStarted","Data":"73ddbacdd7e0e17e076807494aa228e63fdf726550cbfe6bfb31d6cfbd93c8c0"} Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.376715 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-z7r2t"] Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.378335 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-z7r2t" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.385724 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-z7r2t"] Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.465319 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c319a7c-6074-41bf-9410-03fca21a603c-operator-scripts\") pod \"keystone-db-create-z7r2t\" (UID: \"9c319a7c-6074-41bf-9410-03fca21a603c\") " pod="openstack/keystone-db-create-z7r2t" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.465524 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h54vf\" (UniqueName: \"kubernetes.io/projected/9c319a7c-6074-41bf-9410-03fca21a603c-kube-api-access-h54vf\") pod \"keystone-db-create-z7r2t\" (UID: \"9c319a7c-6074-41bf-9410-03fca21a603c\") " pod="openstack/keystone-db-create-z7r2t" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.477776 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b906-account-create-update-5c6w9"] Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.479122 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b906-account-create-update-5c6w9" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.482221 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.484328 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b906-account-create-update-5c6w9"] Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.567466 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h54vf\" (UniqueName: \"kubernetes.io/projected/9c319a7c-6074-41bf-9410-03fca21a603c-kube-api-access-h54vf\") pod \"keystone-db-create-z7r2t\" (UID: \"9c319a7c-6074-41bf-9410-03fca21a603c\") " pod="openstack/keystone-db-create-z7r2t" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.567529 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wjcr\" (UniqueName: \"kubernetes.io/projected/07706a57-9497-4ff4-8e6f-92c84e806c2a-kube-api-access-8wjcr\") pod \"keystone-b906-account-create-update-5c6w9\" (UID: \"07706a57-9497-4ff4-8e6f-92c84e806c2a\") " pod="openstack/keystone-b906-account-create-update-5c6w9" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.567636 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07706a57-9497-4ff4-8e6f-92c84e806c2a-operator-scripts\") pod \"keystone-b906-account-create-update-5c6w9\" (UID: \"07706a57-9497-4ff4-8e6f-92c84e806c2a\") " pod="openstack/keystone-b906-account-create-update-5c6w9" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.567698 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c319a7c-6074-41bf-9410-03fca21a603c-operator-scripts\") pod \"keystone-db-create-z7r2t\" 
(UID: \"9c319a7c-6074-41bf-9410-03fca21a603c\") " pod="openstack/keystone-db-create-z7r2t" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.568897 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c319a7c-6074-41bf-9410-03fca21a603c-operator-scripts\") pod \"keystone-db-create-z7r2t\" (UID: \"9c319a7c-6074-41bf-9410-03fca21a603c\") " pod="openstack/keystone-db-create-z7r2t" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.585986 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h54vf\" (UniqueName: \"kubernetes.io/projected/9c319a7c-6074-41bf-9410-03fca21a603c-kube-api-access-h54vf\") pod \"keystone-db-create-z7r2t\" (UID: \"9c319a7c-6074-41bf-9410-03fca21a603c\") " pod="openstack/keystone-db-create-z7r2t" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.669681 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07706a57-9497-4ff4-8e6f-92c84e806c2a-operator-scripts\") pod \"keystone-b906-account-create-update-5c6w9\" (UID: \"07706a57-9497-4ff4-8e6f-92c84e806c2a\") " pod="openstack/keystone-b906-account-create-update-5c6w9" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.670390 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wjcr\" (UniqueName: \"kubernetes.io/projected/07706a57-9497-4ff4-8e6f-92c84e806c2a-kube-api-access-8wjcr\") pod \"keystone-b906-account-create-update-5c6w9\" (UID: \"07706a57-9497-4ff4-8e6f-92c84e806c2a\") " pod="openstack/keystone-b906-account-create-update-5c6w9" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.670670 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07706a57-9497-4ff4-8e6f-92c84e806c2a-operator-scripts\") pod 
\"keystone-b906-account-create-update-5c6w9\" (UID: \"07706a57-9497-4ff4-8e6f-92c84e806c2a\") " pod="openstack/keystone-b906-account-create-update-5c6w9" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.686654 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wjcr\" (UniqueName: \"kubernetes.io/projected/07706a57-9497-4ff4-8e6f-92c84e806c2a-kube-api-access-8wjcr\") pod \"keystone-b906-account-create-update-5c6w9\" (UID: \"07706a57-9497-4ff4-8e6f-92c84e806c2a\") " pod="openstack/keystone-b906-account-create-update-5c6w9" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.699629 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-z7r2t" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.771689 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-wwznk"] Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.773673 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wwznk" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.778428 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wwznk"] Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.791795 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b906-account-create-update-5c6w9" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.866983 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-693d-account-create-update-87r57"] Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.868213 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-693d-account-create-update-87r57" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.870818 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.875071 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8da0056-3ad7-49f6-857d-ad1710ecf088-operator-scripts\") pod \"placement-db-create-wwznk\" (UID: \"f8da0056-3ad7-49f6-857d-ad1710ecf088\") " pod="openstack/placement-db-create-wwznk" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.875121 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkgnf\" (UniqueName: \"kubernetes.io/projected/f8da0056-3ad7-49f6-857d-ad1710ecf088-kube-api-access-mkgnf\") pod \"placement-db-create-wwznk\" (UID: \"f8da0056-3ad7-49f6-857d-ad1710ecf088\") " pod="openstack/placement-db-create-wwznk" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.884848 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-693d-account-create-update-87r57"] Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.977592 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkgnf\" (UniqueName: \"kubernetes.io/projected/f8da0056-3ad7-49f6-857d-ad1710ecf088-kube-api-access-mkgnf\") pod \"placement-db-create-wwznk\" (UID: \"f8da0056-3ad7-49f6-857d-ad1710ecf088\") " pod="openstack/placement-db-create-wwznk" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.977765 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4rrn\" (UniqueName: \"kubernetes.io/projected/f339ea36-b8e2-4309-aeb3-aa8d4a2eb137-kube-api-access-h4rrn\") pod \"placement-693d-account-create-update-87r57\" (UID: 
\"f339ea36-b8e2-4309-aeb3-aa8d4a2eb137\") " pod="openstack/placement-693d-account-create-update-87r57" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.977812 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f339ea36-b8e2-4309-aeb3-aa8d4a2eb137-operator-scripts\") pod \"placement-693d-account-create-update-87r57\" (UID: \"f339ea36-b8e2-4309-aeb3-aa8d4a2eb137\") " pod="openstack/placement-693d-account-create-update-87r57" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.978128 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8da0056-3ad7-49f6-857d-ad1710ecf088-operator-scripts\") pod \"placement-db-create-wwznk\" (UID: \"f8da0056-3ad7-49f6-857d-ad1710ecf088\") " pod="openstack/placement-db-create-wwznk" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.979009 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8da0056-3ad7-49f6-857d-ad1710ecf088-operator-scripts\") pod \"placement-db-create-wwznk\" (UID: \"f8da0056-3ad7-49f6-857d-ad1710ecf088\") " pod="openstack/placement-db-create-wwznk" Dec 15 05:51:24 crc kubenswrapper[4747]: I1215 05:51:24.995567 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkgnf\" (UniqueName: \"kubernetes.io/projected/f8da0056-3ad7-49f6-857d-ad1710ecf088-kube-api-access-mkgnf\") pod \"placement-db-create-wwznk\" (UID: \"f8da0056-3ad7-49f6-857d-ad1710ecf088\") " pod="openstack/placement-db-create-wwznk" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.080767 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4rrn\" (UniqueName: \"kubernetes.io/projected/f339ea36-b8e2-4309-aeb3-aa8d4a2eb137-kube-api-access-h4rrn\") pod 
\"placement-693d-account-create-update-87r57\" (UID: \"f339ea36-b8e2-4309-aeb3-aa8d4a2eb137\") " pod="openstack/placement-693d-account-create-update-87r57" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.080824 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f339ea36-b8e2-4309-aeb3-aa8d4a2eb137-operator-scripts\") pod \"placement-693d-account-create-update-87r57\" (UID: \"f339ea36-b8e2-4309-aeb3-aa8d4a2eb137\") " pod="openstack/placement-693d-account-create-update-87r57" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.081820 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f339ea36-b8e2-4309-aeb3-aa8d4a2eb137-operator-scripts\") pod \"placement-693d-account-create-update-87r57\" (UID: \"f339ea36-b8e2-4309-aeb3-aa8d4a2eb137\") " pod="openstack/placement-693d-account-create-update-87r57" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.097444 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wwznk" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.097886 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4rrn\" (UniqueName: \"kubernetes.io/projected/f339ea36-b8e2-4309-aeb3-aa8d4a2eb137-kube-api-access-h4rrn\") pod \"placement-693d-account-create-update-87r57\" (UID: \"f339ea36-b8e2-4309-aeb3-aa8d4a2eb137\") " pod="openstack/placement-693d-account-create-update-87r57" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.180210 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-693d-account-create-update-87r57" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.340012 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-195a-account-create-update-zngwp" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.347510 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wkf55" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.491757 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae24795d-f2ab-4a80-a939-fa3bddb8f742-operator-scripts\") pod \"ae24795d-f2ab-4a80-a939-fa3bddb8f742\" (UID: \"ae24795d-f2ab-4a80-a939-fa3bddb8f742\") " Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.493018 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f75d9da5-1f04-45c4-87ad-b236ed88c43c-operator-scripts\") pod \"f75d9da5-1f04-45c4-87ad-b236ed88c43c\" (UID: \"f75d9da5-1f04-45c4-87ad-b236ed88c43c\") " Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.493049 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nwp7\" (UniqueName: \"kubernetes.io/projected/f75d9da5-1f04-45c4-87ad-b236ed88c43c-kube-api-access-9nwp7\") pod \"f75d9da5-1f04-45c4-87ad-b236ed88c43c\" (UID: \"f75d9da5-1f04-45c4-87ad-b236ed88c43c\") " Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.493105 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwndm\" (UniqueName: \"kubernetes.io/projected/ae24795d-f2ab-4a80-a939-fa3bddb8f742-kube-api-access-vwndm\") pod \"ae24795d-f2ab-4a80-a939-fa3bddb8f742\" (UID: \"ae24795d-f2ab-4a80-a939-fa3bddb8f742\") " Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.493705 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/08c6df63-e1b2-4194-9bbe-b07410de16e7-etc-swift\") pod \"swift-storage-0\" (UID: 
\"08c6df63-e1b2-4194-9bbe-b07410de16e7\") " pod="openstack/swift-storage-0" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.492846 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae24795d-f2ab-4a80-a939-fa3bddb8f742-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ae24795d-f2ab-4a80-a939-fa3bddb8f742" (UID: "ae24795d-f2ab-4a80-a939-fa3bddb8f742"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.494345 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f75d9da5-1f04-45c4-87ad-b236ed88c43c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f75d9da5-1f04-45c4-87ad-b236ed88c43c" (UID: "f75d9da5-1f04-45c4-87ad-b236ed88c43c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:25 crc kubenswrapper[4747]: E1215 05:51:25.494445 4747 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 15 05:51:25 crc kubenswrapper[4747]: E1215 05:51:25.494458 4747 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 15 05:51:25 crc kubenswrapper[4747]: E1215 05:51:25.494513 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08c6df63-e1b2-4194-9bbe-b07410de16e7-etc-swift podName:08c6df63-e1b2-4194-9bbe-b07410de16e7 nodeName:}" failed. No retries permitted until 2025-12-15 05:51:33.49449637 +0000 UTC m=+857.191008288 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/08c6df63-e1b2-4194-9bbe-b07410de16e7-etc-swift") pod "swift-storage-0" (UID: "08c6df63-e1b2-4194-9bbe-b07410de16e7") : configmap "swift-ring-files" not found Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.498140 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae24795d-f2ab-4a80-a939-fa3bddb8f742-kube-api-access-vwndm" (OuterVolumeSpecName: "kube-api-access-vwndm") pod "ae24795d-f2ab-4a80-a939-fa3bddb8f742" (UID: "ae24795d-f2ab-4a80-a939-fa3bddb8f742"). InnerVolumeSpecName "kube-api-access-vwndm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.498232 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f75d9da5-1f04-45c4-87ad-b236ed88c43c-kube-api-access-9nwp7" (OuterVolumeSpecName: "kube-api-access-9nwp7") pod "f75d9da5-1f04-45c4-87ad-b236ed88c43c" (UID: "f75d9da5-1f04-45c4-87ad-b236ed88c43c"). InnerVolumeSpecName "kube-api-access-9nwp7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.539993 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.540086 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jmz8h" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.595522 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae24795d-f2ab-4a80-a939-fa3bddb8f742-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.595555 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f75d9da5-1f04-45c4-87ad-b236ed88c43c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.595567 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nwp7\" (UniqueName: \"kubernetes.io/projected/f75d9da5-1f04-45c4-87ad-b236ed88c43c-kube-api-access-9nwp7\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.595579 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwndm\" (UniqueName: \"kubernetes.io/projected/ae24795d-f2ab-4a80-a939-fa3bddb8f742-kube-api-access-vwndm\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.737910 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-b65n4-config-fnljr"] Dec 15 05:51:25 crc kubenswrapper[4747]: E1215 05:51:25.738303 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f75d9da5-1f04-45c4-87ad-b236ed88c43c" containerName="mariadb-account-create-update" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.738321 4747 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="f75d9da5-1f04-45c4-87ad-b236ed88c43c" containerName="mariadb-account-create-update" Dec 15 05:51:25 crc kubenswrapper[4747]: E1215 05:51:25.738339 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae24795d-f2ab-4a80-a939-fa3bddb8f742" containerName="mariadb-database-create" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.738346 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae24795d-f2ab-4a80-a939-fa3bddb8f742" containerName="mariadb-database-create" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.738511 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae24795d-f2ab-4a80-a939-fa3bddb8f742" containerName="mariadb-database-create" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.738529 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f75d9da5-1f04-45c4-87ad-b236ed88c43c" containerName="mariadb-account-create-update" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.739096 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.740588 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.750329 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b65n4-config-fnljr"] Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.787259 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b906-account-create-update-5c6w9"] Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.819540 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-z7r2t"] Dec 15 05:51:25 crc kubenswrapper[4747]: W1215 05:51:25.834380 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c319a7c_6074_41bf_9410_03fca21a603c.slice/crio-828dc4e47ed2d2c95faa60b1e2bfb72fc1b85570afd705506c68702bdfe4732e WatchSource:0}: Error finding container 828dc4e47ed2d2c95faa60b1e2bfb72fc1b85570afd705506c68702bdfe4732e: Status 404 returned error can't find the container with id 828dc4e47ed2d2c95faa60b1e2bfb72fc1b85570afd705506c68702bdfe4732e Dec 15 05:51:25 crc kubenswrapper[4747]: W1215 05:51:25.850422 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8da0056_3ad7_49f6_857d_ad1710ecf088.slice/crio-b1b4476d9aeeb604b0363fc31cde0e94af79369ec1d0eb0f01788f42bb343182 WatchSource:0}: Error finding container b1b4476d9aeeb604b0363fc31cde0e94af79369ec1d0eb0f01788f42bb343182: Status 404 returned error can't find the container with id b1b4476d9aeeb604b0363fc31cde0e94af79369ec1d0eb0f01788f42bb343182 Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.869618 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wwznk"] 
Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.892642 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-693d-account-create-update-87r57"] Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.901690 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f3b4638-4016-4edb-94e6-b624336018ce-scripts\") pod \"ovn-controller-b65n4-config-fnljr\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.901802 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2f3b4638-4016-4edb-94e6-b624336018ce-additional-scripts\") pod \"ovn-controller-b65n4-config-fnljr\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.901986 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f3b4638-4016-4edb-94e6-b624336018ce-var-run-ovn\") pod \"ovn-controller-b65n4-config-fnljr\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.902167 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f3b4638-4016-4edb-94e6-b624336018ce-var-run\") pod \"ovn-controller-b65n4-config-fnljr\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.902247 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rn2m8\" (UniqueName: \"kubernetes.io/projected/2f3b4638-4016-4edb-94e6-b624336018ce-kube-api-access-rn2m8\") pod \"ovn-controller-b65n4-config-fnljr\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:25 crc kubenswrapper[4747]: I1215 05:51:25.902456 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f3b4638-4016-4edb-94e6-b624336018ce-var-log-ovn\") pod \"ovn-controller-b65n4-config-fnljr\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.004503 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f3b4638-4016-4edb-94e6-b624336018ce-var-run\") pod \"ovn-controller-b65n4-config-fnljr\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.004567 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn2m8\" (UniqueName: \"kubernetes.io/projected/2f3b4638-4016-4edb-94e6-b624336018ce-kube-api-access-rn2m8\") pod \"ovn-controller-b65n4-config-fnljr\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.004694 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f3b4638-4016-4edb-94e6-b624336018ce-var-log-ovn\") pod \"ovn-controller-b65n4-config-fnljr\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.004754 4747 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f3b4638-4016-4edb-94e6-b624336018ce-scripts\") pod \"ovn-controller-b65n4-config-fnljr\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.004825 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2f3b4638-4016-4edb-94e6-b624336018ce-additional-scripts\") pod \"ovn-controller-b65n4-config-fnljr\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.004942 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f3b4638-4016-4edb-94e6-b624336018ce-var-run-ovn\") pod \"ovn-controller-b65n4-config-fnljr\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.005251 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f3b4638-4016-4edb-94e6-b624336018ce-var-run-ovn\") pod \"ovn-controller-b65n4-config-fnljr\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.005262 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f3b4638-4016-4edb-94e6-b624336018ce-var-run\") pod \"ovn-controller-b65n4-config-fnljr\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.005262 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/2f3b4638-4016-4edb-94e6-b624336018ce-var-log-ovn\") pod \"ovn-controller-b65n4-config-fnljr\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.005823 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2f3b4638-4016-4edb-94e6-b624336018ce-additional-scripts\") pod \"ovn-controller-b65n4-config-fnljr\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.007412 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f3b4638-4016-4edb-94e6-b624336018ce-scripts\") pod \"ovn-controller-b65n4-config-fnljr\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.024820 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wkf55" event={"ID":"ae24795d-f2ab-4a80-a939-fa3bddb8f742","Type":"ContainerDied","Data":"e7c1cb2cb517d7c78a42df0dc1396636cee678d71c94172886587979a053c159"} Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.024877 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7c1cb2cb517d7c78a42df0dc1396636cee678d71c94172886587979a053c159" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.024902 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-wkf55" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.026331 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn2m8\" (UniqueName: \"kubernetes.io/projected/2f3b4638-4016-4edb-94e6-b624336018ce-kube-api-access-rn2m8\") pod \"ovn-controller-b65n4-config-fnljr\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.026462 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b906-account-create-update-5c6w9" event={"ID":"07706a57-9497-4ff4-8e6f-92c84e806c2a","Type":"ContainerStarted","Data":"36747122a11638d14c7dffe0437bded122022fab9a12fbae9fedad0901eb325f"} Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.026516 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b906-account-create-update-5c6w9" event={"ID":"07706a57-9497-4ff4-8e6f-92c84e806c2a","Type":"ContainerStarted","Data":"95ec5aa058dd3ca9341850d62229fe300ff32a15619d3b0ab80cf2220f286baa"} Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.031464 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-195a-account-create-update-zngwp" event={"ID":"f75d9da5-1f04-45c4-87ad-b236ed88c43c","Type":"ContainerDied","Data":"73ddbacdd7e0e17e076807494aa228e63fdf726550cbfe6bfb31d6cfbd93c8c0"} Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.031502 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73ddbacdd7e0e17e076807494aa228e63fdf726550cbfe6bfb31d6cfbd93c8c0" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.031518 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-195a-account-create-update-zngwp" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.033051 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-f2kl2" event={"ID":"6a5299e8-666f-431f-9ecc-5dcc74352e38","Type":"ContainerStarted","Data":"9fa7b729d0f4786d744fc20946a2d28b5a61b6aa27d7144aebda476fa95aafd7"} Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.034993 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wwznk" event={"ID":"f8da0056-3ad7-49f6-857d-ad1710ecf088","Type":"ContainerStarted","Data":"d315be2bddacfc247d3f9cb5e93548805a5811e387d9ad556bb465273972648c"} Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.035026 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wwznk" event={"ID":"f8da0056-3ad7-49f6-857d-ad1710ecf088","Type":"ContainerStarted","Data":"b1b4476d9aeeb604b0363fc31cde0e94af79369ec1d0eb0f01788f42bb343182"} Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.038054 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-693d-account-create-update-87r57" event={"ID":"f339ea36-b8e2-4309-aeb3-aa8d4a2eb137","Type":"ContainerStarted","Data":"9bca76fc2f69254ae07fdc6160b6b8b10617e59e5f2ae7bec23ee5b93f871480"} Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.038108 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-693d-account-create-update-87r57" event={"ID":"f339ea36-b8e2-4309-aeb3-aa8d4a2eb137","Type":"ContainerStarted","Data":"0f4aacc898b16a4a9940833f4dc6723d6cd26974a0d7273fc0708e00c61ffac3"} Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.042468 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-z7r2t" event={"ID":"9c319a7c-6074-41bf-9410-03fca21a603c","Type":"ContainerStarted","Data":"edb611d6ebceb5d0c7c671494d6dd8f48e7f94209d79f3837908f999e3ceccd2"} Dec 15 
05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.042528 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-z7r2t" event={"ID":"9c319a7c-6074-41bf-9410-03fca21a603c","Type":"ContainerStarted","Data":"828dc4e47ed2d2c95faa60b1e2bfb72fc1b85570afd705506c68702bdfe4732e"} Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.044171 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-b906-account-create-update-5c6w9" podStartSLOduration=2.044159842 podStartE2EDuration="2.044159842s" podCreationTimestamp="2025-12-15 05:51:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:51:26.040982215 +0000 UTC m=+849.737494132" watchObservedRunningTime="2025-12-15 05:51:26.044159842 +0000 UTC m=+849.740671759" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.059400 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.071853 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-693d-account-create-update-87r57" podStartSLOduration=2.071839339 podStartE2EDuration="2.071839339s" podCreationTimestamp="2025-12-15 05:51:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:51:26.055347233 +0000 UTC m=+849.751859150" watchObservedRunningTime="2025-12-15 05:51:26.071839339 +0000 UTC m=+849.768351256" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.076444 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-f2kl2" podStartSLOduration=1.62712741 podStartE2EDuration="8.076434643s" podCreationTimestamp="2025-12-15 05:51:18 +0000 UTC" firstStartedPulling="2025-12-15 
05:51:18.936578029 +0000 UTC m=+842.633089946" lastFinishedPulling="2025-12-15 05:51:25.385885263 +0000 UTC m=+849.082397179" observedRunningTime="2025-12-15 05:51:26.072919963 +0000 UTC m=+849.769431880" watchObservedRunningTime="2025-12-15 05:51:26.076434643 +0000 UTC m=+849.772946561" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.092129 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-wwznk" podStartSLOduration=2.092105198 podStartE2EDuration="2.092105198s" podCreationTimestamp="2025-12-15 05:51:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:51:26.086480751 +0000 UTC m=+849.782992669" watchObservedRunningTime="2025-12-15 05:51:26.092105198 +0000 UTC m=+849.788617115" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.111229 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-z7r2t" podStartSLOduration=2.111207895 podStartE2EDuration="2.111207895s" podCreationTimestamp="2025-12-15 05:51:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:51:26.101693888 +0000 UTC m=+849.798205806" watchObservedRunningTime="2025-12-15 05:51:26.111207895 +0000 UTC m=+849.807719812" Dec 15 05:51:26 crc kubenswrapper[4747]: E1215 05:51:26.182263 4747 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf75d9da5_1f04_45c4_87ad_b236ed88c43c.slice\": RecentStats: unable to find data in memory cache]" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.495382 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b65n4-config-fnljr"] Dec 15 05:51:26 crc kubenswrapper[4747]: W1215 05:51:26.496890 4747 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f3b4638_4016_4edb_94e6_b624336018ce.slice/crio-be6b4bfb6febf9e34ba393f7db5d8fc26ed21c460f370df50c55f2351aa55712 WatchSource:0}: Error finding container be6b4bfb6febf9e34ba393f7db5d8fc26ed21c460f370df50c55f2351aa55712: Status 404 returned error can't find the container with id be6b4bfb6febf9e34ba393f7db5d8fc26ed21c460f370df50c55f2351aa55712 Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.758055 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.806667 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78d59ccb8c-t4s8l"] Dec 15 05:51:26 crc kubenswrapper[4747]: I1215 05:51:26.806896 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l" podUID="5f2d45f0-0b7e-4950-b133-c3a27441c33a" containerName="dnsmasq-dns" containerID="cri-o://566a100f4badeff2fff0f1ef1d1a34c66253287bdcfa3a957050a731bcfc5850" gracePeriod=10 Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.049863 4747 generic.go:334] "Generic (PLEG): container finished" podID="f8da0056-3ad7-49f6-857d-ad1710ecf088" containerID="d315be2bddacfc247d3f9cb5e93548805a5811e387d9ad556bb465273972648c" exitCode=0 Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.050066 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wwznk" event={"ID":"f8da0056-3ad7-49f6-857d-ad1710ecf088","Type":"ContainerDied","Data":"d315be2bddacfc247d3f9cb5e93548805a5811e387d9ad556bb465273972648c"} Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.052880 4747 generic.go:334] "Generic (PLEG): container finished" podID="5f2d45f0-0b7e-4950-b133-c3a27441c33a" containerID="566a100f4badeff2fff0f1ef1d1a34c66253287bdcfa3a957050a731bcfc5850" exitCode=0 
Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.053001 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l" event={"ID":"5f2d45f0-0b7e-4950-b133-c3a27441c33a","Type":"ContainerDied","Data":"566a100f4badeff2fff0f1ef1d1a34c66253287bdcfa3a957050a731bcfc5850"} Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.058615 4747 generic.go:334] "Generic (PLEG): container finished" podID="f339ea36-b8e2-4309-aeb3-aa8d4a2eb137" containerID="9bca76fc2f69254ae07fdc6160b6b8b10617e59e5f2ae7bec23ee5b93f871480" exitCode=0 Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.058683 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-693d-account-create-update-87r57" event={"ID":"f339ea36-b8e2-4309-aeb3-aa8d4a2eb137","Type":"ContainerDied","Data":"9bca76fc2f69254ae07fdc6160b6b8b10617e59e5f2ae7bec23ee5b93f871480"} Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.060549 4747 generic.go:334] "Generic (PLEG): container finished" podID="9c319a7c-6074-41bf-9410-03fca21a603c" containerID="edb611d6ebceb5d0c7c671494d6dd8f48e7f94209d79f3837908f999e3ceccd2" exitCode=0 Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.060650 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-z7r2t" event={"ID":"9c319a7c-6074-41bf-9410-03fca21a603c","Type":"ContainerDied","Data":"edb611d6ebceb5d0c7c671494d6dd8f48e7f94209d79f3837908f999e3ceccd2"} Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.062184 4747 generic.go:334] "Generic (PLEG): container finished" podID="07706a57-9497-4ff4-8e6f-92c84e806c2a" containerID="36747122a11638d14c7dffe0437bded122022fab9a12fbae9fedad0901eb325f" exitCode=0 Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.062215 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b906-account-create-update-5c6w9" 
event={"ID":"07706a57-9497-4ff4-8e6f-92c84e806c2a","Type":"ContainerDied","Data":"36747122a11638d14c7dffe0437bded122022fab9a12fbae9fedad0901eb325f"} Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.064836 4747 generic.go:334] "Generic (PLEG): container finished" podID="2f3b4638-4016-4edb-94e6-b624336018ce" containerID="0a29c784c81fc753a180f35eb4973ee2dd62fc90b65276d06981bf99cd60274d" exitCode=0 Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.065014 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b65n4-config-fnljr" event={"ID":"2f3b4638-4016-4edb-94e6-b624336018ce","Type":"ContainerDied","Data":"0a29c784c81fc753a180f35eb4973ee2dd62fc90b65276d06981bf99cd60274d"} Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.065063 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b65n4-config-fnljr" event={"ID":"2f3b4638-4016-4edb-94e6-b624336018ce","Type":"ContainerStarted","Data":"be6b4bfb6febf9e34ba393f7db5d8fc26ed21c460f370df50c55f2351aa55712"} Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.222516 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l" Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.336413 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj8cr\" (UniqueName: \"kubernetes.io/projected/5f2d45f0-0b7e-4950-b133-c3a27441c33a-kube-api-access-mj8cr\") pod \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.336497 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-config\") pod \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.336551 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-ovsdbserver-nb\") pod \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.336570 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-dns-svc\") pod \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.336631 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-ovsdbserver-sb\") pod \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.350099 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5f2d45f0-0b7e-4950-b133-c3a27441c33a-kube-api-access-mj8cr" (OuterVolumeSpecName: "kube-api-access-mj8cr") pod "5f2d45f0-0b7e-4950-b133-c3a27441c33a" (UID: "5f2d45f0-0b7e-4950-b133-c3a27441c33a"). InnerVolumeSpecName "kube-api-access-mj8cr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.374193 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f2d45f0-0b7e-4950-b133-c3a27441c33a" (UID: "5f2d45f0-0b7e-4950-b133-c3a27441c33a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:27 crc kubenswrapper[4747]: E1215 05:51:27.374244 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-ovsdbserver-sb podName:5f2d45f0-0b7e-4950-b133-c3a27441c33a nodeName:}" failed. No retries permitted until 2025-12-15 05:51:27.874217365 +0000 UTC m=+851.570729282 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ovsdbserver-sb" (UniqueName: "kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-ovsdbserver-sb") pod "5f2d45f0-0b7e-4950-b133-c3a27441c33a" (UID: "5f2d45f0-0b7e-4950-b133-c3a27441c33a") : error deleting /var/lib/kubelet/pods/5f2d45f0-0b7e-4950-b133-c3a27441c33a/volume-subpaths: remove /var/lib/kubelet/pods/5f2d45f0-0b7e-4950-b133-c3a27441c33a/volume-subpaths: no such file or directory Dec 15 05:51:27 crc kubenswrapper[4747]: E1215 05:51:27.374302 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-config podName:5f2d45f0-0b7e-4950-b133-c3a27441c33a nodeName:}" failed. No retries permitted until 2025-12-15 05:51:27.874281841 +0000 UTC m=+851.570793758 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-config") pod "5f2d45f0-0b7e-4950-b133-c3a27441c33a" (UID: "5f2d45f0-0b7e-4950-b133-c3a27441c33a") : error deleting /var/lib/kubelet/pods/5f2d45f0-0b7e-4950-b133-c3a27441c33a/volume-subpaths: remove /var/lib/kubelet/pods/5f2d45f0-0b7e-4950-b133-c3a27441c33a/volume-subpaths: no such file or directory Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.374555 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f2d45f0-0b7e-4950-b133-c3a27441c33a" (UID: "5f2d45f0-0b7e-4950-b133-c3a27441c33a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.440778 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj8cr\" (UniqueName: \"kubernetes.io/projected/5f2d45f0-0b7e-4950-b133-c3a27441c33a-kube-api-access-mj8cr\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.440878 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.440973 4747 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.948185 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-config\") pod \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " Dec 
15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.948289 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-ovsdbserver-sb\") pod \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\" (UID: \"5f2d45f0-0b7e-4950-b133-c3a27441c33a\") " Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.948792 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-config" (OuterVolumeSpecName: "config") pod "5f2d45f0-0b7e-4950-b133-c3a27441c33a" (UID: "5f2d45f0-0b7e-4950-b133-c3a27441c33a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:27 crc kubenswrapper[4747]: I1215 05:51:27.948998 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f2d45f0-0b7e-4950-b133-c3a27441c33a" (UID: "5f2d45f0-0b7e-4950-b133-c3a27441c33a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.051159 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.051198 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f2d45f0-0b7e-4950-b133-c3a27441c33a-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.078583 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l" event={"ID":"5f2d45f0-0b7e-4950-b133-c3a27441c33a","Type":"ContainerDied","Data":"30b2b3d735632c6b7d9db92f559f377696d924427d047f7ef89cce82efdca384"} Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.078599 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78d59ccb8c-t4s8l" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.078671 4747 scope.go:117] "RemoveContainer" containerID="566a100f4badeff2fff0f1ef1d1a34c66253287bdcfa3a957050a731bcfc5850" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.121024 4747 scope.go:117] "RemoveContainer" containerID="b59c40f6273e1d192504f921eae61618104788c40d889190035a482a657317fd" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.122602 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78d59ccb8c-t4s8l"] Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.126166 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78d59ccb8c-t4s8l"] Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.359996 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-wwznk" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.463708 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkgnf\" (UniqueName: \"kubernetes.io/projected/f8da0056-3ad7-49f6-857d-ad1710ecf088-kube-api-access-mkgnf\") pod \"f8da0056-3ad7-49f6-857d-ad1710ecf088\" (UID: \"f8da0056-3ad7-49f6-857d-ad1710ecf088\") " Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.464183 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8da0056-3ad7-49f6-857d-ad1710ecf088-operator-scripts\") pod \"f8da0056-3ad7-49f6-857d-ad1710ecf088\" (UID: \"f8da0056-3ad7-49f6-857d-ad1710ecf088\") " Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.465436 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8da0056-3ad7-49f6-857d-ad1710ecf088-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f8da0056-3ad7-49f6-857d-ad1710ecf088" (UID: "f8da0056-3ad7-49f6-857d-ad1710ecf088"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.473631 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8da0056-3ad7-49f6-857d-ad1710ecf088-kube-api-access-mkgnf" (OuterVolumeSpecName: "kube-api-access-mkgnf") pod "f8da0056-3ad7-49f6-857d-ad1710ecf088" (UID: "f8da0056-3ad7-49f6-857d-ad1710ecf088"). InnerVolumeSpecName "kube-api-access-mkgnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.499697 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-693d-account-create-update-87r57" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.514139 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b906-account-create-update-5c6w9" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.522986 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.540171 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-z7r2t" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.569743 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2f3b4638-4016-4edb-94e6-b624336018ce-additional-scripts\") pod \"2f3b4638-4016-4edb-94e6-b624336018ce\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.569811 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h54vf\" (UniqueName: \"kubernetes.io/projected/9c319a7c-6074-41bf-9410-03fca21a603c-kube-api-access-h54vf\") pod \"9c319a7c-6074-41bf-9410-03fca21a603c\" (UID: \"9c319a7c-6074-41bf-9410-03fca21a603c\") " Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.569874 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f3b4638-4016-4edb-94e6-b624336018ce-scripts\") pod \"2f3b4638-4016-4edb-94e6-b624336018ce\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.569905 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4rrn\" (UniqueName: 
\"kubernetes.io/projected/f339ea36-b8e2-4309-aeb3-aa8d4a2eb137-kube-api-access-h4rrn\") pod \"f339ea36-b8e2-4309-aeb3-aa8d4a2eb137\" (UID: \"f339ea36-b8e2-4309-aeb3-aa8d4a2eb137\") " Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.569952 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f3b4638-4016-4edb-94e6-b624336018ce-var-run-ovn\") pod \"2f3b4638-4016-4edb-94e6-b624336018ce\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.569977 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wjcr\" (UniqueName: \"kubernetes.io/projected/07706a57-9497-4ff4-8e6f-92c84e806c2a-kube-api-access-8wjcr\") pod \"07706a57-9497-4ff4-8e6f-92c84e806c2a\" (UID: \"07706a57-9497-4ff4-8e6f-92c84e806c2a\") " Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.570010 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c319a7c-6074-41bf-9410-03fca21a603c-operator-scripts\") pod \"9c319a7c-6074-41bf-9410-03fca21a603c\" (UID: \"9c319a7c-6074-41bf-9410-03fca21a603c\") " Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.570038 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f339ea36-b8e2-4309-aeb3-aa8d4a2eb137-operator-scripts\") pod \"f339ea36-b8e2-4309-aeb3-aa8d4a2eb137\" (UID: \"f339ea36-b8e2-4309-aeb3-aa8d4a2eb137\") " Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.570055 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f3b4638-4016-4edb-94e6-b624336018ce-var-run\") pod \"2f3b4638-4016-4edb-94e6-b624336018ce\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " Dec 15 05:51:28 crc kubenswrapper[4747]: 
I1215 05:51:28.570079 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn2m8\" (UniqueName: \"kubernetes.io/projected/2f3b4638-4016-4edb-94e6-b624336018ce-kube-api-access-rn2m8\") pod \"2f3b4638-4016-4edb-94e6-b624336018ce\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.570144 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07706a57-9497-4ff4-8e6f-92c84e806c2a-operator-scripts\") pod \"07706a57-9497-4ff4-8e6f-92c84e806c2a\" (UID: \"07706a57-9497-4ff4-8e6f-92c84e806c2a\") " Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.570203 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f3b4638-4016-4edb-94e6-b624336018ce-var-log-ovn\") pod \"2f3b4638-4016-4edb-94e6-b624336018ce\" (UID: \"2f3b4638-4016-4edb-94e6-b624336018ce\") " Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.570225 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f3b4638-4016-4edb-94e6-b624336018ce-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2f3b4638-4016-4edb-94e6-b624336018ce" (UID: "2f3b4638-4016-4edb-94e6-b624336018ce"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.570541 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8da0056-3ad7-49f6-857d-ad1710ecf088-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.570563 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkgnf\" (UniqueName: \"kubernetes.io/projected/f8da0056-3ad7-49f6-857d-ad1710ecf088-kube-api-access-mkgnf\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.570577 4747 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f3b4638-4016-4edb-94e6-b624336018ce-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.570608 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f3b4638-4016-4edb-94e6-b624336018ce-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2f3b4638-4016-4edb-94e6-b624336018ce" (UID: "2f3b4638-4016-4edb-94e6-b624336018ce"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.571011 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f3b4638-4016-4edb-94e6-b624336018ce-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2f3b4638-4016-4edb-94e6-b624336018ce" (UID: "2f3b4638-4016-4edb-94e6-b624336018ce"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.571221 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f3b4638-4016-4edb-94e6-b624336018ce-var-run" (OuterVolumeSpecName: "var-run") pod "2f3b4638-4016-4edb-94e6-b624336018ce" (UID: "2f3b4638-4016-4edb-94e6-b624336018ce"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.571698 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c319a7c-6074-41bf-9410-03fca21a603c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c319a7c-6074-41bf-9410-03fca21a603c" (UID: "9c319a7c-6074-41bf-9410-03fca21a603c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.572161 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f339ea36-b8e2-4309-aeb3-aa8d4a2eb137-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f339ea36-b8e2-4309-aeb3-aa8d4a2eb137" (UID: "f339ea36-b8e2-4309-aeb3-aa8d4a2eb137"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.572356 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f3b4638-4016-4edb-94e6-b624336018ce-scripts" (OuterVolumeSpecName: "scripts") pod "2f3b4638-4016-4edb-94e6-b624336018ce" (UID: "2f3b4638-4016-4edb-94e6-b624336018ce"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.573730 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07706a57-9497-4ff4-8e6f-92c84e806c2a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07706a57-9497-4ff4-8e6f-92c84e806c2a" (UID: "07706a57-9497-4ff4-8e6f-92c84e806c2a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.574712 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07706a57-9497-4ff4-8e6f-92c84e806c2a-kube-api-access-8wjcr" (OuterVolumeSpecName: "kube-api-access-8wjcr") pod "07706a57-9497-4ff4-8e6f-92c84e806c2a" (UID: "07706a57-9497-4ff4-8e6f-92c84e806c2a"). InnerVolumeSpecName "kube-api-access-8wjcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.574980 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f3b4638-4016-4edb-94e6-b624336018ce-kube-api-access-rn2m8" (OuterVolumeSpecName: "kube-api-access-rn2m8") pod "2f3b4638-4016-4edb-94e6-b624336018ce" (UID: "2f3b4638-4016-4edb-94e6-b624336018ce"). InnerVolumeSpecName "kube-api-access-rn2m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.576382 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f339ea36-b8e2-4309-aeb3-aa8d4a2eb137-kube-api-access-h4rrn" (OuterVolumeSpecName: "kube-api-access-h4rrn") pod "f339ea36-b8e2-4309-aeb3-aa8d4a2eb137" (UID: "f339ea36-b8e2-4309-aeb3-aa8d4a2eb137"). InnerVolumeSpecName "kube-api-access-h4rrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.580364 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c319a7c-6074-41bf-9410-03fca21a603c-kube-api-access-h54vf" (OuterVolumeSpecName: "kube-api-access-h54vf") pod "9c319a7c-6074-41bf-9410-03fca21a603c" (UID: "9c319a7c-6074-41bf-9410-03fca21a603c"). InnerVolumeSpecName "kube-api-access-h54vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.640466 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f2d45f0-0b7e-4950-b133-c3a27441c33a" path="/var/lib/kubelet/pods/5f2d45f0-0b7e-4950-b133-c3a27441c33a/volumes" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.671868 4747 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f3b4638-4016-4edb-94e6-b624336018ce-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.671906 4747 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2f3b4638-4016-4edb-94e6-b624336018ce-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.671920 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h54vf\" (UniqueName: \"kubernetes.io/projected/9c319a7c-6074-41bf-9410-03fca21a603c-kube-api-access-h54vf\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.671950 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f3b4638-4016-4edb-94e6-b624336018ce-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.671962 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4rrn\" 
(UniqueName: \"kubernetes.io/projected/f339ea36-b8e2-4309-aeb3-aa8d4a2eb137-kube-api-access-h4rrn\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.671971 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wjcr\" (UniqueName: \"kubernetes.io/projected/07706a57-9497-4ff4-8e6f-92c84e806c2a-kube-api-access-8wjcr\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.671982 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c319a7c-6074-41bf-9410-03fca21a603c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.671991 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f339ea36-b8e2-4309-aeb3-aa8d4a2eb137-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.671999 4747 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f3b4638-4016-4edb-94e6-b624336018ce-var-run\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.672010 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn2m8\" (UniqueName: \"kubernetes.io/projected/2f3b4638-4016-4edb-94e6-b624336018ce-kube-api-access-rn2m8\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.672018 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07706a57-9497-4ff4-8e6f-92c84e806c2a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.865182 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.865712 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 05:51:28 crc kubenswrapper[4747]: I1215 05:51:28.865811 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.088799 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b65n4-config-fnljr" event={"ID":"2f3b4638-4016-4edb-94e6-b624336018ce","Type":"ContainerDied","Data":"be6b4bfb6febf9e34ba393f7db5d8fc26ed21c460f370df50c55f2351aa55712"} Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.088842 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be6b4bfb6febf9e34ba393f7db5d8fc26ed21c460f370df50c55f2351aa55712" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.088868 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-b65n4-config-fnljr" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.090699 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wwznk" event={"ID":"f8da0056-3ad7-49f6-857d-ad1710ecf088","Type":"ContainerDied","Data":"b1b4476d9aeeb604b0363fc31cde0e94af79369ec1d0eb0f01788f42bb343182"} Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.090752 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1b4476d9aeeb604b0363fc31cde0e94af79369ec1d0eb0f01788f42bb343182" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.090816 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wwznk" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.094172 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-693d-account-create-update-87r57" event={"ID":"f339ea36-b8e2-4309-aeb3-aa8d4a2eb137","Type":"ContainerDied","Data":"0f4aacc898b16a4a9940833f4dc6723d6cd26974a0d7273fc0708e00c61ffac3"} Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.094205 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f4aacc898b16a4a9940833f4dc6723d6cd26974a0d7273fc0708e00c61ffac3" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.094232 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-693d-account-create-update-87r57" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.096543 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-z7r2t" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.096564 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-z7r2t" event={"ID":"9c319a7c-6074-41bf-9410-03fca21a603c","Type":"ContainerDied","Data":"828dc4e47ed2d2c95faa60b1e2bfb72fc1b85570afd705506c68702bdfe4732e"} Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.096612 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="828dc4e47ed2d2c95faa60b1e2bfb72fc1b85570afd705506c68702bdfe4732e" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.100646 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f6be68cbfc9d5eee88cda586fa59c68181c75ecba41c64c7ee60c7ad6d664b8"} pod="openshift-machine-config-operator/machine-config-daemon-nldtn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.100714 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" containerID="cri-o://1f6be68cbfc9d5eee88cda586fa59c68181c75ecba41c64c7ee60c7ad6d664b8" gracePeriod=600 Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.100887 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b906-account-create-update-5c6w9" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.101325 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b906-account-create-update-5c6w9" event={"ID":"07706a57-9497-4ff4-8e6f-92c84e806c2a","Type":"ContainerDied","Data":"95ec5aa058dd3ca9341850d62229fe300ff32a15619d3b0ab80cf2220f286baa"} Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.101346 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95ec5aa058dd3ca9341850d62229fe300ff32a15619d3b0ab80cf2220f286baa" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.627921 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-b65n4-config-fnljr"] Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.644097 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-b65n4-config-fnljr"] Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.680269 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-b65n4-config-qgcwd"] Dec 15 05:51:29 crc kubenswrapper[4747]: E1215 05:51:29.681369 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2d45f0-0b7e-4950-b133-c3a27441c33a" containerName="dnsmasq-dns" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.681459 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2d45f0-0b7e-4950-b133-c3a27441c33a" containerName="dnsmasq-dns" Dec 15 05:51:29 crc kubenswrapper[4747]: E1215 05:51:29.681553 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f339ea36-b8e2-4309-aeb3-aa8d4a2eb137" containerName="mariadb-account-create-update" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.681604 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f339ea36-b8e2-4309-aeb3-aa8d4a2eb137" containerName="mariadb-account-create-update" Dec 15 05:51:29 crc kubenswrapper[4747]: E1215 
05:51:29.681671 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2d45f0-0b7e-4950-b133-c3a27441c33a" containerName="init" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.681720 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2d45f0-0b7e-4950-b133-c3a27441c33a" containerName="init" Dec 15 05:51:29 crc kubenswrapper[4747]: E1215 05:51:29.681782 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c319a7c-6074-41bf-9410-03fca21a603c" containerName="mariadb-database-create" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.681829 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c319a7c-6074-41bf-9410-03fca21a603c" containerName="mariadb-database-create" Dec 15 05:51:29 crc kubenswrapper[4747]: E1215 05:51:29.681892 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3b4638-4016-4edb-94e6-b624336018ce" containerName="ovn-config" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.681956 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3b4638-4016-4edb-94e6-b624336018ce" containerName="ovn-config" Dec 15 05:51:29 crc kubenswrapper[4747]: E1215 05:51:29.682023 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8da0056-3ad7-49f6-857d-ad1710ecf088" containerName="mariadb-database-create" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.682072 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8da0056-3ad7-49f6-857d-ad1710ecf088" containerName="mariadb-database-create" Dec 15 05:51:29 crc kubenswrapper[4747]: E1215 05:51:29.682137 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07706a57-9497-4ff4-8e6f-92c84e806c2a" containerName="mariadb-account-create-update" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.682202 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="07706a57-9497-4ff4-8e6f-92c84e806c2a" containerName="mariadb-account-create-update" Dec 15 05:51:29 crc 
kubenswrapper[4747]: I1215 05:51:29.682461 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c319a7c-6074-41bf-9410-03fca21a603c" containerName="mariadb-database-create" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.682528 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f339ea36-b8e2-4309-aeb3-aa8d4a2eb137" containerName="mariadb-account-create-update" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.682598 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8da0056-3ad7-49f6-857d-ad1710ecf088" containerName="mariadb-database-create" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.682657 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="07706a57-9497-4ff4-8e6f-92c84e806c2a" containerName="mariadb-account-create-update" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.682716 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f2d45f0-0b7e-4950-b133-c3a27441c33a" containerName="dnsmasq-dns" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.682774 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f3b4638-4016-4edb-94e6-b624336018ce" containerName="ovn-config" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.683640 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.688460 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b65n4-config-qgcwd"] Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.693217 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e9647818-0dd1-486a-aa18-3e90baa2e679-var-run\") pod \"ovn-controller-b65n4-config-qgcwd\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.693287 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g2zl\" (UniqueName: \"kubernetes.io/projected/e9647818-0dd1-486a-aa18-3e90baa2e679-kube-api-access-6g2zl\") pod \"ovn-controller-b65n4-config-qgcwd\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.693336 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9647818-0dd1-486a-aa18-3e90baa2e679-var-run-ovn\") pod \"ovn-controller-b65n4-config-qgcwd\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.693358 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e9647818-0dd1-486a-aa18-3e90baa2e679-var-log-ovn\") pod \"ovn-controller-b65n4-config-qgcwd\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.693479 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9647818-0dd1-486a-aa18-3e90baa2e679-scripts\") pod \"ovn-controller-b65n4-config-qgcwd\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.693607 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e9647818-0dd1-486a-aa18-3e90baa2e679-additional-scripts\") pod \"ovn-controller-b65n4-config-qgcwd\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.693917 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.795121 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g2zl\" (UniqueName: \"kubernetes.io/projected/e9647818-0dd1-486a-aa18-3e90baa2e679-kube-api-access-6g2zl\") pod \"ovn-controller-b65n4-config-qgcwd\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.795209 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9647818-0dd1-486a-aa18-3e90baa2e679-var-run-ovn\") pod \"ovn-controller-b65n4-config-qgcwd\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.795235 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e9647818-0dd1-486a-aa18-3e90baa2e679-var-log-ovn\") pod 
\"ovn-controller-b65n4-config-qgcwd\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.795329 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9647818-0dd1-486a-aa18-3e90baa2e679-scripts\") pod \"ovn-controller-b65n4-config-qgcwd\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.795420 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e9647818-0dd1-486a-aa18-3e90baa2e679-additional-scripts\") pod \"ovn-controller-b65n4-config-qgcwd\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.795485 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e9647818-0dd1-486a-aa18-3e90baa2e679-var-run\") pod \"ovn-controller-b65n4-config-qgcwd\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.795811 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e9647818-0dd1-486a-aa18-3e90baa2e679-var-log-ovn\") pod \"ovn-controller-b65n4-config-qgcwd\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.795830 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9647818-0dd1-486a-aa18-3e90baa2e679-var-run-ovn\") pod \"ovn-controller-b65n4-config-qgcwd\" (UID: 
\"e9647818-0dd1-486a-aa18-3e90baa2e679\") " pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.795833 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e9647818-0dd1-486a-aa18-3e90baa2e679-var-run\") pod \"ovn-controller-b65n4-config-qgcwd\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.796752 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e9647818-0dd1-486a-aa18-3e90baa2e679-additional-scripts\") pod \"ovn-controller-b65n4-config-qgcwd\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.798080 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9647818-0dd1-486a-aa18-3e90baa2e679-scripts\") pod \"ovn-controller-b65n4-config-qgcwd\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:29 crc kubenswrapper[4747]: I1215 05:51:29.814508 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g2zl\" (UniqueName: \"kubernetes.io/projected/e9647818-0dd1-486a-aa18-3e90baa2e679-kube-api-access-6g2zl\") pod \"ovn-controller-b65n4-config-qgcwd\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.008509 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.119554 4747 generic.go:334] "Generic (PLEG): container finished" podID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerID="1f6be68cbfc9d5eee88cda586fa59c68181c75ecba41c64c7ee60c7ad6d664b8" exitCode=0 Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.119621 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerDied","Data":"1f6be68cbfc9d5eee88cda586fa59c68181c75ecba41c64c7ee60c7ad6d664b8"} Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.119717 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerStarted","Data":"90f12c1fab813a5975dd1bb7980ef75e3315dc4c893a83d1630f8dfbea3891d6"} Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.119758 4747 scope.go:117] "RemoveContainer" containerID="a17943ccf4995eb4ff240ba732355ee9e9020e929a2275df58776bf83d66a3b3" Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.433526 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b65n4-config-qgcwd"] Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.457180 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-bw8hg"] Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.458284 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bw8hg" Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.460492 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.460902 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-q89ks" Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.465203 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bw8hg"] Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.509996 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd62cda8-66b6-4fe4-976d-0723d296a262-db-sync-config-data\") pod \"glance-db-sync-bw8hg\" (UID: \"fd62cda8-66b6-4fe4-976d-0723d296a262\") " pod="openstack/glance-db-sync-bw8hg" Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.510103 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd62cda8-66b6-4fe4-976d-0723d296a262-combined-ca-bundle\") pod \"glance-db-sync-bw8hg\" (UID: \"fd62cda8-66b6-4fe4-976d-0723d296a262\") " pod="openstack/glance-db-sync-bw8hg" Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.510274 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svj4c\" (UniqueName: \"kubernetes.io/projected/fd62cda8-66b6-4fe4-976d-0723d296a262-kube-api-access-svj4c\") pod \"glance-db-sync-bw8hg\" (UID: \"fd62cda8-66b6-4fe4-976d-0723d296a262\") " pod="openstack/glance-db-sync-bw8hg" Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.510348 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fd62cda8-66b6-4fe4-976d-0723d296a262-config-data\") pod \"glance-db-sync-bw8hg\" (UID: \"fd62cda8-66b6-4fe4-976d-0723d296a262\") " pod="openstack/glance-db-sync-bw8hg" Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.611642 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd62cda8-66b6-4fe4-976d-0723d296a262-db-sync-config-data\") pod \"glance-db-sync-bw8hg\" (UID: \"fd62cda8-66b6-4fe4-976d-0723d296a262\") " pod="openstack/glance-db-sync-bw8hg" Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.611714 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd62cda8-66b6-4fe4-976d-0723d296a262-combined-ca-bundle\") pod \"glance-db-sync-bw8hg\" (UID: \"fd62cda8-66b6-4fe4-976d-0723d296a262\") " pod="openstack/glance-db-sync-bw8hg" Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.611800 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svj4c\" (UniqueName: \"kubernetes.io/projected/fd62cda8-66b6-4fe4-976d-0723d296a262-kube-api-access-svj4c\") pod \"glance-db-sync-bw8hg\" (UID: \"fd62cda8-66b6-4fe4-976d-0723d296a262\") " pod="openstack/glance-db-sync-bw8hg" Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.611850 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd62cda8-66b6-4fe4-976d-0723d296a262-config-data\") pod \"glance-db-sync-bw8hg\" (UID: \"fd62cda8-66b6-4fe4-976d-0723d296a262\") " pod="openstack/glance-db-sync-bw8hg" Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.617352 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd62cda8-66b6-4fe4-976d-0723d296a262-db-sync-config-data\") pod \"glance-db-sync-bw8hg\" (UID: 
\"fd62cda8-66b6-4fe4-976d-0723d296a262\") " pod="openstack/glance-db-sync-bw8hg" Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.617388 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd62cda8-66b6-4fe4-976d-0723d296a262-config-data\") pod \"glance-db-sync-bw8hg\" (UID: \"fd62cda8-66b6-4fe4-976d-0723d296a262\") " pod="openstack/glance-db-sync-bw8hg" Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.617398 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd62cda8-66b6-4fe4-976d-0723d296a262-combined-ca-bundle\") pod \"glance-db-sync-bw8hg\" (UID: \"fd62cda8-66b6-4fe4-976d-0723d296a262\") " pod="openstack/glance-db-sync-bw8hg" Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.627341 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svj4c\" (UniqueName: \"kubernetes.io/projected/fd62cda8-66b6-4fe4-976d-0723d296a262-kube-api-access-svj4c\") pod \"glance-db-sync-bw8hg\" (UID: \"fd62cda8-66b6-4fe4-976d-0723d296a262\") " pod="openstack/glance-db-sync-bw8hg" Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.637545 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f3b4638-4016-4edb-94e6-b624336018ce" path="/var/lib/kubelet/pods/2f3b4638-4016-4edb-94e6-b624336018ce/volumes" Dec 15 05:51:30 crc kubenswrapper[4747]: I1215 05:51:30.779982 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bw8hg" Dec 15 05:51:31 crc kubenswrapper[4747]: I1215 05:51:31.131053 4747 generic.go:334] "Generic (PLEG): container finished" podID="e9647818-0dd1-486a-aa18-3e90baa2e679" containerID="0e878fe8d8975fe2f99daff6abc119591b283a5216a74fb7c9901edd89a0d656" exitCode=0 Dec 15 05:51:31 crc kubenswrapper[4747]: I1215 05:51:31.131457 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b65n4-config-qgcwd" event={"ID":"e9647818-0dd1-486a-aa18-3e90baa2e679","Type":"ContainerDied","Data":"0e878fe8d8975fe2f99daff6abc119591b283a5216a74fb7c9901edd89a0d656"} Dec 15 05:51:31 crc kubenswrapper[4747]: I1215 05:51:31.131509 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b65n4-config-qgcwd" event={"ID":"e9647818-0dd1-486a-aa18-3e90baa2e679","Type":"ContainerStarted","Data":"a7d9ea1e94b6ae4063027d5353b684d9da5d97dedc93d53dedc05a61e804262f"} Dec 15 05:51:31 crc kubenswrapper[4747]: I1215 05:51:31.253115 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bw8hg"] Dec 15 05:51:31 crc kubenswrapper[4747]: W1215 05:51:31.256460 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd62cda8_66b6_4fe4_976d_0723d296a262.slice/crio-435840b0c20f593a3128e526e713de56a7b47d8900f13325f92a05373f46c51a WatchSource:0}: Error finding container 435840b0c20f593a3128e526e713de56a7b47d8900f13325f92a05373f46c51a: Status 404 returned error can't find the container with id 435840b0c20f593a3128e526e713de56a7b47d8900f13325f92a05373f46c51a Dec 15 05:51:32 crc kubenswrapper[4747]: I1215 05:51:32.145333 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bw8hg" event={"ID":"fd62cda8-66b6-4fe4-976d-0723d296a262","Type":"ContainerStarted","Data":"435840b0c20f593a3128e526e713de56a7b47d8900f13325f92a05373f46c51a"} Dec 15 05:51:32 crc kubenswrapper[4747]: 
I1215 05:51:32.147847 4747 generic.go:334] "Generic (PLEG): container finished" podID="6a5299e8-666f-431f-9ecc-5dcc74352e38" containerID="9fa7b729d0f4786d744fc20946a2d28b5a61b6aa27d7144aebda476fa95aafd7" exitCode=0 Dec 15 05:51:32 crc kubenswrapper[4747]: I1215 05:51:32.148169 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-f2kl2" event={"ID":"6a5299e8-666f-431f-9ecc-5dcc74352e38","Type":"ContainerDied","Data":"9fa7b729d0f4786d744fc20946a2d28b5a61b6aa27d7144aebda476fa95aafd7"} Dec 15 05:51:32 crc kubenswrapper[4747]: I1215 05:51:32.420807 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:32 crc kubenswrapper[4747]: I1215 05:51:32.458365 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g2zl\" (UniqueName: \"kubernetes.io/projected/e9647818-0dd1-486a-aa18-3e90baa2e679-kube-api-access-6g2zl\") pod \"e9647818-0dd1-486a-aa18-3e90baa2e679\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " Dec 15 05:51:32 crc kubenswrapper[4747]: I1215 05:51:32.458420 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e9647818-0dd1-486a-aa18-3e90baa2e679-var-run\") pod \"e9647818-0dd1-486a-aa18-3e90baa2e679\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " Dec 15 05:51:32 crc kubenswrapper[4747]: I1215 05:51:32.458537 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9647818-0dd1-486a-aa18-3e90baa2e679-var-run" (OuterVolumeSpecName: "var-run") pod "e9647818-0dd1-486a-aa18-3e90baa2e679" (UID: "e9647818-0dd1-486a-aa18-3e90baa2e679"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:51:32 crc kubenswrapper[4747]: I1215 05:51:32.458608 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e9647818-0dd1-486a-aa18-3e90baa2e679-additional-scripts\") pod \"e9647818-0dd1-486a-aa18-3e90baa2e679\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " Dec 15 05:51:32 crc kubenswrapper[4747]: I1215 05:51:32.459684 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9647818-0dd1-486a-aa18-3e90baa2e679-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e9647818-0dd1-486a-aa18-3e90baa2e679" (UID: "e9647818-0dd1-486a-aa18-3e90baa2e679"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:32 crc kubenswrapper[4747]: I1215 05:51:32.459723 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9647818-0dd1-486a-aa18-3e90baa2e679-scripts\") pod \"e9647818-0dd1-486a-aa18-3e90baa2e679\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " Dec 15 05:51:32 crc kubenswrapper[4747]: I1215 05:51:32.459895 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e9647818-0dd1-486a-aa18-3e90baa2e679-var-log-ovn\") pod \"e9647818-0dd1-486a-aa18-3e90baa2e679\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " Dec 15 05:51:32 crc kubenswrapper[4747]: I1215 05:51:32.459978 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9647818-0dd1-486a-aa18-3e90baa2e679-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e9647818-0dd1-486a-aa18-3e90baa2e679" (UID: "e9647818-0dd1-486a-aa18-3e90baa2e679"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:51:32 crc kubenswrapper[4747]: I1215 05:51:32.459976 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9647818-0dd1-486a-aa18-3e90baa2e679-scripts" (OuterVolumeSpecName: "scripts") pod "e9647818-0dd1-486a-aa18-3e90baa2e679" (UID: "e9647818-0dd1-486a-aa18-3e90baa2e679"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:32 crc kubenswrapper[4747]: I1215 05:51:32.460071 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9647818-0dd1-486a-aa18-3e90baa2e679-var-run-ovn\") pod \"e9647818-0dd1-486a-aa18-3e90baa2e679\" (UID: \"e9647818-0dd1-486a-aa18-3e90baa2e679\") " Dec 15 05:51:32 crc kubenswrapper[4747]: I1215 05:51:32.460250 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9647818-0dd1-486a-aa18-3e90baa2e679-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e9647818-0dd1-486a-aa18-3e90baa2e679" (UID: "e9647818-0dd1-486a-aa18-3e90baa2e679"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:51:32 crc kubenswrapper[4747]: I1215 05:51:32.460813 4747 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e9647818-0dd1-486a-aa18-3e90baa2e679-var-run\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:32 crc kubenswrapper[4747]: I1215 05:51:32.460834 4747 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e9647818-0dd1-486a-aa18-3e90baa2e679-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:32 crc kubenswrapper[4747]: I1215 05:51:32.460847 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9647818-0dd1-486a-aa18-3e90baa2e679-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:32 crc kubenswrapper[4747]: I1215 05:51:32.460856 4747 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e9647818-0dd1-486a-aa18-3e90baa2e679-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:32 crc kubenswrapper[4747]: I1215 05:51:32.460865 4747 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9647818-0dd1-486a-aa18-3e90baa2e679-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:32 crc kubenswrapper[4747]: I1215 05:51:32.464505 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9647818-0dd1-486a-aa18-3e90baa2e679-kube-api-access-6g2zl" (OuterVolumeSpecName: "kube-api-access-6g2zl") pod "e9647818-0dd1-486a-aa18-3e90baa2e679" (UID: "e9647818-0dd1-486a-aa18-3e90baa2e679"). InnerVolumeSpecName "kube-api-access-6g2zl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:51:32 crc kubenswrapper[4747]: I1215 05:51:32.563910 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g2zl\" (UniqueName: \"kubernetes.io/projected/e9647818-0dd1-486a-aa18-3e90baa2e679-kube-api-access-6g2zl\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.163129 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b65n4-config-qgcwd" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.163128 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b65n4-config-qgcwd" event={"ID":"e9647818-0dd1-486a-aa18-3e90baa2e679","Type":"ContainerDied","Data":"a7d9ea1e94b6ae4063027d5353b684d9da5d97dedc93d53dedc05a61e804262f"} Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.163703 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7d9ea1e94b6ae4063027d5353b684d9da5d97dedc93d53dedc05a61e804262f" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.430818 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.484196 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a5299e8-666f-431f-9ecc-5dcc74352e38-combined-ca-bundle\") pod \"6a5299e8-666f-431f-9ecc-5dcc74352e38\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.484538 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6a5299e8-666f-431f-9ecc-5dcc74352e38-ring-data-devices\") pod \"6a5299e8-666f-431f-9ecc-5dcc74352e38\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.484572 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6a5299e8-666f-431f-9ecc-5dcc74352e38-swiftconf\") pod \"6a5299e8-666f-431f-9ecc-5dcc74352e38\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.485115 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6a5299e8-666f-431f-9ecc-5dcc74352e38-etc-swift\") pod \"6a5299e8-666f-431f-9ecc-5dcc74352e38\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.485172 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a5299e8-666f-431f-9ecc-5dcc74352e38-scripts\") pod \"6a5299e8-666f-431f-9ecc-5dcc74352e38\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.485207 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d526\" (UniqueName: 
\"kubernetes.io/projected/6a5299e8-666f-431f-9ecc-5dcc74352e38-kube-api-access-7d526\") pod \"6a5299e8-666f-431f-9ecc-5dcc74352e38\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.485256 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6a5299e8-666f-431f-9ecc-5dcc74352e38-dispersionconf\") pod \"6a5299e8-666f-431f-9ecc-5dcc74352e38\" (UID: \"6a5299e8-666f-431f-9ecc-5dcc74352e38\") " Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.485610 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a5299e8-666f-431f-9ecc-5dcc74352e38-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "6a5299e8-666f-431f-9ecc-5dcc74352e38" (UID: "6a5299e8-666f-431f-9ecc-5dcc74352e38"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.485642 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a5299e8-666f-431f-9ecc-5dcc74352e38-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6a5299e8-666f-431f-9ecc-5dcc74352e38" (UID: "6a5299e8-666f-431f-9ecc-5dcc74352e38"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.486229 4747 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6a5299e8-666f-431f-9ecc-5dcc74352e38-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.486258 4747 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6a5299e8-666f-431f-9ecc-5dcc74352e38-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.494236 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a5299e8-666f-431f-9ecc-5dcc74352e38-kube-api-access-7d526" (OuterVolumeSpecName: "kube-api-access-7d526") pod "6a5299e8-666f-431f-9ecc-5dcc74352e38" (UID: "6a5299e8-666f-431f-9ecc-5dcc74352e38"). InnerVolumeSpecName "kube-api-access-7d526". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.494338 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-b65n4-config-qgcwd"] Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.496231 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a5299e8-666f-431f-9ecc-5dcc74352e38-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "6a5299e8-666f-431f-9ecc-5dcc74352e38" (UID: "6a5299e8-666f-431f-9ecc-5dcc74352e38"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.514331 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-b65n4-config-qgcwd"] Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.514356 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a5299e8-666f-431f-9ecc-5dcc74352e38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a5299e8-666f-431f-9ecc-5dcc74352e38" (UID: "6a5299e8-666f-431f-9ecc-5dcc74352e38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.522418 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a5299e8-666f-431f-9ecc-5dcc74352e38-scripts" (OuterVolumeSpecName: "scripts") pod "6a5299e8-666f-431f-9ecc-5dcc74352e38" (UID: "6a5299e8-666f-431f-9ecc-5dcc74352e38"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.522740 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a5299e8-666f-431f-9ecc-5dcc74352e38-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "6a5299e8-666f-431f-9ecc-5dcc74352e38" (UID: "6a5299e8-666f-431f-9ecc-5dcc74352e38"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.588421 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/08c6df63-e1b2-4194-9bbe-b07410de16e7-etc-swift\") pod \"swift-storage-0\" (UID: \"08c6df63-e1b2-4194-9bbe-b07410de16e7\") " pod="openstack/swift-storage-0" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.588514 4747 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6a5299e8-666f-431f-9ecc-5dcc74352e38-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.588528 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a5299e8-666f-431f-9ecc-5dcc74352e38-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.588541 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d526\" (UniqueName: \"kubernetes.io/projected/6a5299e8-666f-431f-9ecc-5dcc74352e38-kube-api-access-7d526\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.588552 4747 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6a5299e8-666f-431f-9ecc-5dcc74352e38-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.588561 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a5299e8-666f-431f-9ecc-5dcc74352e38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.593110 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/08c6df63-e1b2-4194-9bbe-b07410de16e7-etc-swift\") pod \"swift-storage-0\" 
(UID: \"08c6df63-e1b2-4194-9bbe-b07410de16e7\") " pod="openstack/swift-storage-0" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.611411 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-b65n4-config-t7qcz"] Dec 15 05:51:33 crc kubenswrapper[4747]: E1215 05:51:33.612135 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a5299e8-666f-431f-9ecc-5dcc74352e38" containerName="swift-ring-rebalance" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.612181 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a5299e8-666f-431f-9ecc-5dcc74352e38" containerName="swift-ring-rebalance" Dec 15 05:51:33 crc kubenswrapper[4747]: E1215 05:51:33.612220 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9647818-0dd1-486a-aa18-3e90baa2e679" containerName="ovn-config" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.612227 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9647818-0dd1-486a-aa18-3e90baa2e679" containerName="ovn-config" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.612497 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9647818-0dd1-486a-aa18-3e90baa2e679" containerName="ovn-config" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.612512 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a5299e8-666f-431f-9ecc-5dcc74352e38" containerName="swift-ring-rebalance" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.613456 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.616798 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.627373 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b65n4-config-t7qcz"] Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.690460 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e372997-1846-4bb4-9e88-5cb7d88bd30b-var-run\") pod \"ovn-controller-b65n4-config-t7qcz\" (UID: \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.690605 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqkxc\" (UniqueName: \"kubernetes.io/projected/3e372997-1846-4bb4-9e88-5cb7d88bd30b-kube-api-access-zqkxc\") pod \"ovn-controller-b65n4-config-t7qcz\" (UID: \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.690876 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e372997-1846-4bb4-9e88-5cb7d88bd30b-additional-scripts\") pod \"ovn-controller-b65n4-config-t7qcz\" (UID: \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.691811 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e372997-1846-4bb4-9e88-5cb7d88bd30b-var-log-ovn\") pod \"ovn-controller-b65n4-config-t7qcz\" (UID: 
\"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.691863 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e372997-1846-4bb4-9e88-5cb7d88bd30b-var-run-ovn\") pod \"ovn-controller-b65n4-config-t7qcz\" (UID: \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.691996 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e372997-1846-4bb4-9e88-5cb7d88bd30b-scripts\") pod \"ovn-controller-b65n4-config-t7qcz\" (UID: \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.793614 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e372997-1846-4bb4-9e88-5cb7d88bd30b-scripts\") pod \"ovn-controller-b65n4-config-t7qcz\" (UID: \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.793700 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e372997-1846-4bb4-9e88-5cb7d88bd30b-var-run\") pod \"ovn-controller-b65n4-config-t7qcz\" (UID: \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.793761 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqkxc\" (UniqueName: \"kubernetes.io/projected/3e372997-1846-4bb4-9e88-5cb7d88bd30b-kube-api-access-zqkxc\") pod \"ovn-controller-b65n4-config-t7qcz\" (UID: 
\"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.793871 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e372997-1846-4bb4-9e88-5cb7d88bd30b-additional-scripts\") pod \"ovn-controller-b65n4-config-t7qcz\" (UID: \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.793907 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e372997-1846-4bb4-9e88-5cb7d88bd30b-var-log-ovn\") pod \"ovn-controller-b65n4-config-t7qcz\" (UID: \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.793946 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e372997-1846-4bb4-9e88-5cb7d88bd30b-var-run-ovn\") pod \"ovn-controller-b65n4-config-t7qcz\" (UID: \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.794296 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e372997-1846-4bb4-9e88-5cb7d88bd30b-var-run-ovn\") pod \"ovn-controller-b65n4-config-t7qcz\" (UID: \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.794368 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e372997-1846-4bb4-9e88-5cb7d88bd30b-var-log-ovn\") pod \"ovn-controller-b65n4-config-t7qcz\" (UID: 
\"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.794409 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e372997-1846-4bb4-9e88-5cb7d88bd30b-var-run\") pod \"ovn-controller-b65n4-config-t7qcz\" (UID: \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.794863 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e372997-1846-4bb4-9e88-5cb7d88bd30b-additional-scripts\") pod \"ovn-controller-b65n4-config-t7qcz\" (UID: \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.796391 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e372997-1846-4bb4-9e88-5cb7d88bd30b-scripts\") pod \"ovn-controller-b65n4-config-t7qcz\" (UID: \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.810191 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqkxc\" (UniqueName: \"kubernetes.io/projected/3e372997-1846-4bb4-9e88-5cb7d88bd30b-kube-api-access-zqkxc\") pod \"ovn-controller-b65n4-config-t7qcz\" (UID: \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.852735 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 15 05:51:33 crc kubenswrapper[4747]: I1215 05:51:33.943637 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:34 crc kubenswrapper[4747]: I1215 05:51:34.178822 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-f2kl2" event={"ID":"6a5299e8-666f-431f-9ecc-5dcc74352e38","Type":"ContainerDied","Data":"5c3f0cfb67f6a159e8ea0a387fe2e9206a3bf9c9e3d58d991e5668561de15056"} Dec 15 05:51:34 crc kubenswrapper[4747]: I1215 05:51:34.179106 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c3f0cfb67f6a159e8ea0a387fe2e9206a3bf9c9e3d58d991e5668561de15056" Dec 15 05:51:34 crc kubenswrapper[4747]: I1215 05:51:34.178953 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-f2kl2" Dec 15 05:51:34 crc kubenswrapper[4747]: I1215 05:51:34.321089 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 15 05:51:34 crc kubenswrapper[4747]: I1215 05:51:34.338256 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b65n4-config-t7qcz"] Dec 15 05:51:34 crc kubenswrapper[4747]: I1215 05:51:34.653986 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9647818-0dd1-486a-aa18-3e90baa2e679" path="/var/lib/kubelet/pods/e9647818-0dd1-486a-aa18-3e90baa2e679/volumes" Dec 15 05:51:35 crc kubenswrapper[4747]: I1215 05:51:35.187011 4747 generic.go:334] "Generic (PLEG): container finished" podID="3e372997-1846-4bb4-9e88-5cb7d88bd30b" containerID="7e8bc6047523f041b8000fd62cbc52aa9d9a6f104eb0eeaea7fe0c442e8d0a4b" exitCode=0 Dec 15 05:51:35 crc kubenswrapper[4747]: I1215 05:51:35.187089 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b65n4-config-t7qcz" event={"ID":"3e372997-1846-4bb4-9e88-5cb7d88bd30b","Type":"ContainerDied","Data":"7e8bc6047523f041b8000fd62cbc52aa9d9a6f104eb0eeaea7fe0c442e8d0a4b"} Dec 15 05:51:35 crc kubenswrapper[4747]: I1215 05:51:35.187121 
4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b65n4-config-t7qcz" event={"ID":"3e372997-1846-4bb4-9e88-5cb7d88bd30b","Type":"ContainerStarted","Data":"fef181b98191b431e175672f30ccde0b72b61689868e284ae4995ba47010404d"} Dec 15 05:51:35 crc kubenswrapper[4747]: I1215 05:51:35.188431 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08c6df63-e1b2-4194-9bbe-b07410de16e7","Type":"ContainerStarted","Data":"efe8864efeed7e54fb29bf69671dd466a5d892e74851a70b8a4838d17a52ffb7"} Dec 15 05:51:35 crc kubenswrapper[4747]: I1215 05:51:35.191625 4747 generic.go:334] "Generic (PLEG): container finished" podID="65a53faf-94ad-48f3-b8e0-8642376f89ee" containerID="8e013d9a657de63787a61a2c6aea79b4254a3e35c2c761dd928d98d5ed13bf52" exitCode=0 Dec 15 05:51:35 crc kubenswrapper[4747]: I1215 05:51:35.191655 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"65a53faf-94ad-48f3-b8e0-8642376f89ee","Type":"ContainerDied","Data":"8e013d9a657de63787a61a2c6aea79b4254a3e35c2c761dd928d98d5ed13bf52"} Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.201699 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08c6df63-e1b2-4194-9bbe-b07410de16e7","Type":"ContainerStarted","Data":"65ca76dc077992f7d9763c2a4b1c636240a0fbebed0e3d6ce32a43c476ff0d21"} Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.202199 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08c6df63-e1b2-4194-9bbe-b07410de16e7","Type":"ContainerStarted","Data":"1f27f4e0e2d4fc8353969980c9a0450727f715cbc2024d28aef832df1c7b80a0"} Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.204205 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"65a53faf-94ad-48f3-b8e0-8642376f89ee","Type":"ContainerStarted","Data":"ee7a70f0b60728a6b77ed0468330e5d9e58738f0a9b357ea4016c118b3ef4d37"} Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.204526 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.205948 4747 generic.go:334] "Generic (PLEG): container finished" podID="9bece5e6-b345-4969-a563-81fb3706f8f1" containerID="7e5ab965190fd6a4bb7f0f426cc015cca7ab231dca777f9c018ffe750cf11f57" exitCode=0 Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.205988 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9bece5e6-b345-4969-a563-81fb3706f8f1","Type":"ContainerDied","Data":"7e5ab965190fd6a4bb7f0f426cc015cca7ab231dca777f9c018ffe750cf11f57"} Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.275586 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371960.579206 podStartE2EDuration="1m16.275569615s" podCreationTimestamp="2025-12-15 05:50:20 +0000 UTC" firstStartedPulling="2025-12-15 05:50:21.993843904 +0000 UTC m=+785.690355820" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:51:36.243900386 +0000 UTC m=+859.940412293" watchObservedRunningTime="2025-12-15 05:51:36.275569615 +0000 UTC m=+859.972081532" Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.506003 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.655420 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e372997-1846-4bb4-9e88-5cb7d88bd30b-var-run\") pod \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\" (UID: \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.655509 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e372997-1846-4bb4-9e88-5cb7d88bd30b-scripts\") pod \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\" (UID: \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.655550 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e372997-1846-4bb4-9e88-5cb7d88bd30b-additional-scripts\") pod \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\" (UID: \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.655642 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e372997-1846-4bb4-9e88-5cb7d88bd30b-var-log-ovn\") pod \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\" (UID: \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.655671 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqkxc\" (UniqueName: \"kubernetes.io/projected/3e372997-1846-4bb4-9e88-5cb7d88bd30b-kube-api-access-zqkxc\") pod \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\" (UID: \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.655805 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/3e372997-1846-4bb4-9e88-5cb7d88bd30b-var-run-ovn\") pod \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\" (UID: \"3e372997-1846-4bb4-9e88-5cb7d88bd30b\") " Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.656321 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e372997-1846-4bb4-9e88-5cb7d88bd30b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "3e372997-1846-4bb4-9e88-5cb7d88bd30b" (UID: "3e372997-1846-4bb4-9e88-5cb7d88bd30b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.656356 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e372997-1846-4bb4-9e88-5cb7d88bd30b-var-run" (OuterVolumeSpecName: "var-run") pod "3e372997-1846-4bb4-9e88-5cb7d88bd30b" (UID: "3e372997-1846-4bb4-9e88-5cb7d88bd30b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.657389 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e372997-1846-4bb4-9e88-5cb7d88bd30b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "3e372997-1846-4bb4-9e88-5cb7d88bd30b" (UID: "3e372997-1846-4bb4-9e88-5cb7d88bd30b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.658299 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e372997-1846-4bb4-9e88-5cb7d88bd30b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "3e372997-1846-4bb4-9e88-5cb7d88bd30b" (UID: "3e372997-1846-4bb4-9e88-5cb7d88bd30b"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.661539 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e372997-1846-4bb4-9e88-5cb7d88bd30b-scripts" (OuterVolumeSpecName: "scripts") pod "3e372997-1846-4bb4-9e88-5cb7d88bd30b" (UID: "3e372997-1846-4bb4-9e88-5cb7d88bd30b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.664817 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e372997-1846-4bb4-9e88-5cb7d88bd30b-kube-api-access-zqkxc" (OuterVolumeSpecName: "kube-api-access-zqkxc") pod "3e372997-1846-4bb4-9e88-5cb7d88bd30b" (UID: "3e372997-1846-4bb4-9e88-5cb7d88bd30b"). InnerVolumeSpecName "kube-api-access-zqkxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.758369 4747 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e372997-1846-4bb4-9e88-5cb7d88bd30b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.758404 4747 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e372997-1846-4bb4-9e88-5cb7d88bd30b-var-run\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.758414 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e372997-1846-4bb4-9e88-5cb7d88bd30b-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.758423 4747 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e372997-1846-4bb4-9e88-5cb7d88bd30b-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 
05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.758432 4747 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e372997-1846-4bb4-9e88-5cb7d88bd30b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:36 crc kubenswrapper[4747]: I1215 05:51:36.758440 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqkxc\" (UniqueName: \"kubernetes.io/projected/3e372997-1846-4bb4-9e88-5cb7d88bd30b-kube-api-access-zqkxc\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:37 crc kubenswrapper[4747]: I1215 05:51:37.218301 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9bece5e6-b345-4969-a563-81fb3706f8f1","Type":"ContainerStarted","Data":"56e02d43c553bcb63c84e7420195e07acf1694be8e3a4f5410fb5bd736df3cbb"} Dec 15 05:51:37 crc kubenswrapper[4747]: I1215 05:51:37.218590 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 15 05:51:37 crc kubenswrapper[4747]: I1215 05:51:37.225429 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08c6df63-e1b2-4194-9bbe-b07410de16e7","Type":"ContainerStarted","Data":"96eda3ba8779f6e74b680043512741a49018ee0ba3f5bf48e5b77b7992d537d6"} Dec 15 05:51:37 crc kubenswrapper[4747]: I1215 05:51:37.225501 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08c6df63-e1b2-4194-9bbe-b07410de16e7","Type":"ContainerStarted","Data":"1e82801af66636c6d5e01987f8c19693f51e6297694f6423c7abea75cf253b21"} Dec 15 05:51:37 crc kubenswrapper[4747]: I1215 05:51:37.227833 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b65n4-config-t7qcz" event={"ID":"3e372997-1846-4bb4-9e88-5cb7d88bd30b","Type":"ContainerDied","Data":"fef181b98191b431e175672f30ccde0b72b61689868e284ae4995ba47010404d"} Dec 15 05:51:37 crc kubenswrapper[4747]: I1215 05:51:37.227889 4747 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fef181b98191b431e175672f30ccde0b72b61689868e284ae4995ba47010404d" Dec 15 05:51:37 crc kubenswrapper[4747]: I1215 05:51:37.227917 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b65n4-config-t7qcz" Dec 15 05:51:37 crc kubenswrapper[4747]: I1215 05:51:37.247590 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.702089582 podStartE2EDuration="1m18.247571348s" podCreationTimestamp="2025-12-15 05:50:19 +0000 UTC" firstStartedPulling="2025-12-15 05:50:21.733093786 +0000 UTC m=+785.429605704" lastFinishedPulling="2025-12-15 05:51:02.278575553 +0000 UTC m=+825.975087470" observedRunningTime="2025-12-15 05:51:37.239969621 +0000 UTC m=+860.936481539" watchObservedRunningTime="2025-12-15 05:51:37.247571348 +0000 UTC m=+860.944083265" Dec 15 05:51:37 crc kubenswrapper[4747]: I1215 05:51:37.588167 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-b65n4-config-t7qcz"] Dec 15 05:51:37 crc kubenswrapper[4747]: I1215 05:51:37.599620 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-b65n4-config-t7qcz"] Dec 15 05:51:38 crc kubenswrapper[4747]: I1215 05:51:38.649366 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e372997-1846-4bb4-9e88-5cb7d88bd30b" path="/var/lib/kubelet/pods/3e372997-1846-4bb4-9e88-5cb7d88bd30b/volumes" Dec 15 05:51:39 crc kubenswrapper[4747]: I1215 05:51:39.263725 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08c6df63-e1b2-4194-9bbe-b07410de16e7","Type":"ContainerStarted","Data":"4d7d076a3d3769f2faa5d3c0a8888507b7c8d06835e104464d57bc5647279ad6"} Dec 15 05:51:39 crc kubenswrapper[4747]: I1215 05:51:39.264082 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"08c6df63-e1b2-4194-9bbe-b07410de16e7","Type":"ContainerStarted","Data":"8fac21eebe939785b5cdd32699e9926f3f53807877be7503d5578a6de2d821ac"} Dec 15 05:51:39 crc kubenswrapper[4747]: I1215 05:51:39.264096 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08c6df63-e1b2-4194-9bbe-b07410de16e7","Type":"ContainerStarted","Data":"9dea2cbfe1aa8dd82c12d741b30f76163d4ba6cfb2a56a96ee9faa0519b04680"} Dec 15 05:51:39 crc kubenswrapper[4747]: I1215 05:51:39.264107 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08c6df63-e1b2-4194-9bbe-b07410de16e7","Type":"ContainerStarted","Data":"6c331703488a9b5c9a4209a5c115a6e145b2f5c32fdda5450ae6dea5d32308df"} Dec 15 05:51:40 crc kubenswrapper[4747]: I1215 05:51:40.569237 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-b65n4" Dec 15 05:51:41 crc kubenswrapper[4747]: I1215 05:51:41.290744 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08c6df63-e1b2-4194-9bbe-b07410de16e7","Type":"ContainerStarted","Data":"9f74672d8d3c3bf17d7804d1a7b8c93976f68e450afdeaf475f5779565099334"} Dec 15 05:51:41 crc kubenswrapper[4747]: I1215 05:51:41.291042 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08c6df63-e1b2-4194-9bbe-b07410de16e7","Type":"ContainerStarted","Data":"fff2b778b3fdb89af0e56248019725e8139652fb3ca37ceb0e712aaa35cbb8de"} Dec 15 05:51:41 crc kubenswrapper[4747]: I1215 05:51:41.291055 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08c6df63-e1b2-4194-9bbe-b07410de16e7","Type":"ContainerStarted","Data":"cf8a43afd3c89e333891eea8c9b1a157a4799e79246e857a00cd63ac81ec71dc"} Dec 15 05:51:41 crc kubenswrapper[4747]: I1215 05:51:41.291064 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"08c6df63-e1b2-4194-9bbe-b07410de16e7","Type":"ContainerStarted","Data":"3ea77568ed9ab119f65c3d4a40cb4455807b10c803b61f15b3b681b5ceacadaf"} Dec 15 05:51:41 crc kubenswrapper[4747]: I1215 05:51:41.291072 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08c6df63-e1b2-4194-9bbe-b07410de16e7","Type":"ContainerStarted","Data":"7a3953e41c74b6579c020e795e9b95f4ac8863d3bab47d7b0f44687596cd365f"} Dec 15 05:51:41 crc kubenswrapper[4747]: I1215 05:51:41.291080 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08c6df63-e1b2-4194-9bbe-b07410de16e7","Type":"ContainerStarted","Data":"ec912abd53c01a8cbd3025a110440c9054ef0379eeb89af19bb07f14ad64f969"} Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.312460 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"08c6df63-e1b2-4194-9bbe-b07410de16e7","Type":"ContainerStarted","Data":"5c7d9e10b0682e4d365c7c0fc296ac8fe7b7f080b8aa5fd78bada639bcf3b656"} Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.349536 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.54564732 podStartE2EDuration="26.349517237s" podCreationTimestamp="2025-12-15 05:51:16 +0000 UTC" firstStartedPulling="2025-12-15 05:51:34.326386668 +0000 UTC m=+858.022898584" lastFinishedPulling="2025-12-15 05:51:40.130256585 +0000 UTC m=+863.826768501" observedRunningTime="2025-12-15 05:51:42.348016135 +0000 UTC m=+866.044528062" watchObservedRunningTime="2025-12-15 05:51:42.349517237 +0000 UTC m=+866.046029154" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.618594 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-667974f6c9-qrzph"] Dec 15 05:51:42 crc kubenswrapper[4747]: E1215 05:51:42.618918 4747 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3e372997-1846-4bb4-9e88-5cb7d88bd30b" containerName="ovn-config" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.618954 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e372997-1846-4bb4-9e88-5cb7d88bd30b" containerName="ovn-config" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.619125 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e372997-1846-4bb4-9e88-5cb7d88bd30b" containerName="ovn-config" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.619910 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.621736 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.641822 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-667974f6c9-qrzph"] Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.781178 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-config\") pod \"dnsmasq-dns-667974f6c9-qrzph\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.781262 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-dns-swift-storage-0\") pod \"dnsmasq-dns-667974f6c9-qrzph\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.781375 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-ovsdbserver-nb\") pod \"dnsmasq-dns-667974f6c9-qrzph\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.781452 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-dns-svc\") pod \"dnsmasq-dns-667974f6c9-qrzph\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.781503 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-ovsdbserver-sb\") pod \"dnsmasq-dns-667974f6c9-qrzph\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.781697 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59mgf\" (UniqueName: \"kubernetes.io/projected/983e9d03-796d-4072-adad-cfe1797ae364-kube-api-access-59mgf\") pod \"dnsmasq-dns-667974f6c9-qrzph\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.883443 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-dns-svc\") pod \"dnsmasq-dns-667974f6c9-qrzph\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.883504 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-ovsdbserver-sb\") pod \"dnsmasq-dns-667974f6c9-qrzph\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.883667 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59mgf\" (UniqueName: \"kubernetes.io/projected/983e9d03-796d-4072-adad-cfe1797ae364-kube-api-access-59mgf\") pod \"dnsmasq-dns-667974f6c9-qrzph\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.883803 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-config\") pod \"dnsmasq-dns-667974f6c9-qrzph\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.883875 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-dns-swift-storage-0\") pod \"dnsmasq-dns-667974f6c9-qrzph\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.883904 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-ovsdbserver-nb\") pod \"dnsmasq-dns-667974f6c9-qrzph\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.884912 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-ovsdbserver-nb\") pod \"dnsmasq-dns-667974f6c9-qrzph\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.885518 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-dns-svc\") pod \"dnsmasq-dns-667974f6c9-qrzph\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.886135 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-ovsdbserver-sb\") pod \"dnsmasq-dns-667974f6c9-qrzph\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.887036 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-config\") pod \"dnsmasq-dns-667974f6c9-qrzph\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.887576 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-dns-swift-storage-0\") pod \"dnsmasq-dns-667974f6c9-qrzph\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.909760 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59mgf\" (UniqueName: \"kubernetes.io/projected/983e9d03-796d-4072-adad-cfe1797ae364-kube-api-access-59mgf\") pod 
\"dnsmasq-dns-667974f6c9-qrzph\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:42 crc kubenswrapper[4747]: I1215 05:51:42.936439 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:48 crc kubenswrapper[4747]: I1215 05:51:48.178416 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-667974f6c9-qrzph"] Dec 15 05:51:48 crc kubenswrapper[4747]: I1215 05:51:48.398333 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bw8hg" event={"ID":"fd62cda8-66b6-4fe4-976d-0723d296a262","Type":"ContainerStarted","Data":"250206dd7cd405d970c21eb1cabe2a6085f3f2aa44cdcaf50f8b6912936d924b"} Dec 15 05:51:48 crc kubenswrapper[4747]: I1215 05:51:48.400984 4747 generic.go:334] "Generic (PLEG): container finished" podID="983e9d03-796d-4072-adad-cfe1797ae364" containerID="27d48a4471e86b4b47005d8d6711b0505780863300e2028f5966ec7f211e5d6d" exitCode=0 Dec 15 05:51:48 crc kubenswrapper[4747]: I1215 05:51:48.401024 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667974f6c9-qrzph" event={"ID":"983e9d03-796d-4072-adad-cfe1797ae364","Type":"ContainerDied","Data":"27d48a4471e86b4b47005d8d6711b0505780863300e2028f5966ec7f211e5d6d"} Dec 15 05:51:48 crc kubenswrapper[4747]: I1215 05:51:48.401044 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667974f6c9-qrzph" event={"ID":"983e9d03-796d-4072-adad-cfe1797ae364","Type":"ContainerStarted","Data":"a54c0b33f1c6be07f9a726c1753efe7bc0931653cb0e4e858a22f0541c194635"} Dec 15 05:51:48 crc kubenswrapper[4747]: I1215 05:51:48.418700 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-bw8hg" podStartSLOduration=1.8599294309999999 podStartE2EDuration="18.418683088s" podCreationTimestamp="2025-12-15 05:51:30 +0000 UTC" firstStartedPulling="2025-12-15 
05:51:31.259203088 +0000 UTC m=+854.955715005" lastFinishedPulling="2025-12-15 05:51:47.817956745 +0000 UTC m=+871.514468662" observedRunningTime="2025-12-15 05:51:48.41288355 +0000 UTC m=+872.109395468" watchObservedRunningTime="2025-12-15 05:51:48.418683088 +0000 UTC m=+872.115195005" Dec 15 05:51:49 crc kubenswrapper[4747]: I1215 05:51:49.411985 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667974f6c9-qrzph" event={"ID":"983e9d03-796d-4072-adad-cfe1797ae364","Type":"ContainerStarted","Data":"f01c886861b4c40ddd45d01696eee29a56f55472df45c6105920abbc84dbb844"} Dec 15 05:51:49 crc kubenswrapper[4747]: I1215 05:51:49.434401 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-667974f6c9-qrzph" podStartSLOduration=7.434373262 podStartE2EDuration="7.434373262s" podCreationTimestamp="2025-12-15 05:51:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:51:49.430733928 +0000 UTC m=+873.127245845" watchObservedRunningTime="2025-12-15 05:51:49.434373262 +0000 UTC m=+873.130885179" Dec 15 05:51:50 crc kubenswrapper[4747]: I1215 05:51:50.418669 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.249210 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.493984 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.605045 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-8851-account-create-update-svz5f"] Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.606946 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8851-account-create-update-svz5f" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.609512 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.614009 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-mnk7n"] Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.616388 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mnk7n" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.619206 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8851-account-create-update-svz5f"] Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.623717 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mnk7n"] Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.679506 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33c060b9-802c-43ba-8020-2bf8afda93ce-operator-scripts\") pod \"cinder-8851-account-create-update-svz5f\" (UID: \"33c060b9-802c-43ba-8020-2bf8afda93ce\") " pod="openstack/cinder-8851-account-create-update-svz5f" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.679617 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-692mz\" (UniqueName: \"kubernetes.io/projected/33c060b9-802c-43ba-8020-2bf8afda93ce-kube-api-access-692mz\") pod \"cinder-8851-account-create-update-svz5f\" (UID: \"33c060b9-802c-43ba-8020-2bf8afda93ce\") " pod="openstack/cinder-8851-account-create-update-svz5f" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.679705 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/24e3063e-4cb4-4695-9f2a-0a26592ec3cf-operator-scripts\") pod \"cinder-db-create-mnk7n\" (UID: \"24e3063e-4cb4-4695-9f2a-0a26592ec3cf\") " pod="openstack/cinder-db-create-mnk7n" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.679754 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbxbw\" (UniqueName: \"kubernetes.io/projected/24e3063e-4cb4-4695-9f2a-0a26592ec3cf-kube-api-access-gbxbw\") pod \"cinder-db-create-mnk7n\" (UID: \"24e3063e-4cb4-4695-9f2a-0a26592ec3cf\") " pod="openstack/cinder-db-create-mnk7n" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.695047 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-jpfz6"] Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.696300 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jpfz6" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.709459 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-492b-account-create-update-cc8pp"] Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.710419 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-492b-account-create-update-cc8pp" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.714764 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.715486 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jpfz6"] Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.719697 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-492b-account-create-update-cc8pp"] Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.781790 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e3063e-4cb4-4695-9f2a-0a26592ec3cf-operator-scripts\") pod \"cinder-db-create-mnk7n\" (UID: \"24e3063e-4cb4-4695-9f2a-0a26592ec3cf\") " pod="openstack/cinder-db-create-mnk7n" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.781883 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbxbw\" (UniqueName: \"kubernetes.io/projected/24e3063e-4cb4-4695-9f2a-0a26592ec3cf-kube-api-access-gbxbw\") pod \"cinder-db-create-mnk7n\" (UID: \"24e3063e-4cb4-4695-9f2a-0a26592ec3cf\") " pod="openstack/cinder-db-create-mnk7n" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.781985 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa0c3cd-1263-4dc6-9d30-b58efa93e393-operator-scripts\") pod \"barbican-db-create-jpfz6\" (UID: \"4aa0c3cd-1263-4dc6-9d30-b58efa93e393\") " pod="openstack/barbican-db-create-jpfz6" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.782037 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/33c060b9-802c-43ba-8020-2bf8afda93ce-operator-scripts\") pod \"cinder-8851-account-create-update-svz5f\" (UID: \"33c060b9-802c-43ba-8020-2bf8afda93ce\") " pod="openstack/cinder-8851-account-create-update-svz5f" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.782072 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32d62287-c73f-47e1-9e64-eb23eaf98dc0-operator-scripts\") pod \"barbican-492b-account-create-update-cc8pp\" (UID: \"32d62287-c73f-47e1-9e64-eb23eaf98dc0\") " pod="openstack/barbican-492b-account-create-update-cc8pp" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.782097 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml7s8\" (UniqueName: \"kubernetes.io/projected/32d62287-c73f-47e1-9e64-eb23eaf98dc0-kube-api-access-ml7s8\") pod \"barbican-492b-account-create-update-cc8pp\" (UID: \"32d62287-c73f-47e1-9e64-eb23eaf98dc0\") " pod="openstack/barbican-492b-account-create-update-cc8pp" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.782144 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd4fc\" (UniqueName: \"kubernetes.io/projected/4aa0c3cd-1263-4dc6-9d30-b58efa93e393-kube-api-access-kd4fc\") pod \"barbican-db-create-jpfz6\" (UID: \"4aa0c3cd-1263-4dc6-9d30-b58efa93e393\") " pod="openstack/barbican-db-create-jpfz6" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.782180 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-692mz\" (UniqueName: \"kubernetes.io/projected/33c060b9-802c-43ba-8020-2bf8afda93ce-kube-api-access-692mz\") pod \"cinder-8851-account-create-update-svz5f\" (UID: \"33c060b9-802c-43ba-8020-2bf8afda93ce\") " pod="openstack/cinder-8851-account-create-update-svz5f" Dec 15 05:51:51 crc 
kubenswrapper[4747]: I1215 05:51:51.782790 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e3063e-4cb4-4695-9f2a-0a26592ec3cf-operator-scripts\") pod \"cinder-db-create-mnk7n\" (UID: \"24e3063e-4cb4-4695-9f2a-0a26592ec3cf\") " pod="openstack/cinder-db-create-mnk7n" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.782798 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33c060b9-802c-43ba-8020-2bf8afda93ce-operator-scripts\") pod \"cinder-8851-account-create-update-svz5f\" (UID: \"33c060b9-802c-43ba-8020-2bf8afda93ce\") " pod="openstack/cinder-8851-account-create-update-svz5f" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.797903 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbxbw\" (UniqueName: \"kubernetes.io/projected/24e3063e-4cb4-4695-9f2a-0a26592ec3cf-kube-api-access-gbxbw\") pod \"cinder-db-create-mnk7n\" (UID: \"24e3063e-4cb4-4695-9f2a-0a26592ec3cf\") " pod="openstack/cinder-db-create-mnk7n" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.800636 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-692mz\" (UniqueName: \"kubernetes.io/projected/33c060b9-802c-43ba-8020-2bf8afda93ce-kube-api-access-692mz\") pod \"cinder-8851-account-create-update-svz5f\" (UID: \"33c060b9-802c-43ba-8020-2bf8afda93ce\") " pod="openstack/cinder-8851-account-create-update-svz5f" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.837566 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-xf272"] Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.838598 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-xf272" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.842450 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.842470 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x9mhm" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.844817 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.849236 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-xf272"] Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.851096 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.883865 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd4fc\" (UniqueName: \"kubernetes.io/projected/4aa0c3cd-1263-4dc6-9d30-b58efa93e393-kube-api-access-kd4fc\") pod \"barbican-db-create-jpfz6\" (UID: \"4aa0c3cd-1263-4dc6-9d30-b58efa93e393\") " pod="openstack/barbican-db-create-jpfz6" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.884115 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa0c3cd-1263-4dc6-9d30-b58efa93e393-operator-scripts\") pod \"barbican-db-create-jpfz6\" (UID: \"4aa0c3cd-1263-4dc6-9d30-b58efa93e393\") " pod="openstack/barbican-db-create-jpfz6" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.884240 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32d62287-c73f-47e1-9e64-eb23eaf98dc0-operator-scripts\") pod \"barbican-492b-account-create-update-cc8pp\" (UID: 
\"32d62287-c73f-47e1-9e64-eb23eaf98dc0\") " pod="openstack/barbican-492b-account-create-update-cc8pp" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.884334 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml7s8\" (UniqueName: \"kubernetes.io/projected/32d62287-c73f-47e1-9e64-eb23eaf98dc0-kube-api-access-ml7s8\") pod \"barbican-492b-account-create-update-cc8pp\" (UID: \"32d62287-c73f-47e1-9e64-eb23eaf98dc0\") " pod="openstack/barbican-492b-account-create-update-cc8pp" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.884861 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa0c3cd-1263-4dc6-9d30-b58efa93e393-operator-scripts\") pod \"barbican-db-create-jpfz6\" (UID: \"4aa0c3cd-1263-4dc6-9d30-b58efa93e393\") " pod="openstack/barbican-db-create-jpfz6" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.885028 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32d62287-c73f-47e1-9e64-eb23eaf98dc0-operator-scripts\") pod \"barbican-492b-account-create-update-cc8pp\" (UID: \"32d62287-c73f-47e1-9e64-eb23eaf98dc0\") " pod="openstack/barbican-492b-account-create-update-cc8pp" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.913449 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml7s8\" (UniqueName: \"kubernetes.io/projected/32d62287-c73f-47e1-9e64-eb23eaf98dc0-kube-api-access-ml7s8\") pod \"barbican-492b-account-create-update-cc8pp\" (UID: \"32d62287-c73f-47e1-9e64-eb23eaf98dc0\") " pod="openstack/barbican-492b-account-create-update-cc8pp" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.915183 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd4fc\" (UniqueName: 
\"kubernetes.io/projected/4aa0c3cd-1263-4dc6-9d30-b58efa93e393-kube-api-access-kd4fc\") pod \"barbican-db-create-jpfz6\" (UID: \"4aa0c3cd-1263-4dc6-9d30-b58efa93e393\") " pod="openstack/barbican-db-create-jpfz6" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.926734 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-05ec-account-create-update-dw6xr"] Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.932093 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-05ec-account-create-update-dw6xr" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.933256 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8851-account-create-update-svz5f" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.938901 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mnk7n" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.939022 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.956678 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-05ec-account-create-update-dw6xr"] Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.986001 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25a4a3fa-f57f-41f5-9f10-664cf17f38c1-config-data\") pod \"keystone-db-sync-xf272\" (UID: \"25a4a3fa-f57f-41f5-9f10-664cf17f38c1\") " pod="openstack/keystone-db-sync-xf272" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.986049 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2v24\" (UniqueName: \"kubernetes.io/projected/25a4a3fa-f57f-41f5-9f10-664cf17f38c1-kube-api-access-k2v24\") pod \"keystone-db-sync-xf272\" 
(UID: \"25a4a3fa-f57f-41f5-9f10-664cf17f38c1\") " pod="openstack/keystone-db-sync-xf272" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.986118 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/961c6dcd-1a16-4c9c-90f4-1ec4325e3512-operator-scripts\") pod \"neutron-05ec-account-create-update-dw6xr\" (UID: \"961c6dcd-1a16-4c9c-90f4-1ec4325e3512\") " pod="openstack/neutron-05ec-account-create-update-dw6xr" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.986207 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25a4a3fa-f57f-41f5-9f10-664cf17f38c1-combined-ca-bundle\") pod \"keystone-db-sync-xf272\" (UID: \"25a4a3fa-f57f-41f5-9f10-664cf17f38c1\") " pod="openstack/keystone-db-sync-xf272" Dec 15 05:51:51 crc kubenswrapper[4747]: I1215 05:51:51.986244 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4bzp\" (UniqueName: \"kubernetes.io/projected/961c6dcd-1a16-4c9c-90f4-1ec4325e3512-kube-api-access-s4bzp\") pod \"neutron-05ec-account-create-update-dw6xr\" (UID: \"961c6dcd-1a16-4c9c-90f4-1ec4325e3512\") " pod="openstack/neutron-05ec-account-create-update-dw6xr" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.012108 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jpfz6" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.016528 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-4kqfj"] Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.025363 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-492b-account-create-update-cc8pp" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.029346 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4kqfj" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.036589 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4kqfj"] Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.091083 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4t87\" (UniqueName: \"kubernetes.io/projected/f9be93f4-5113-41ec-9604-4142d479155d-kube-api-access-v4t87\") pod \"neutron-db-create-4kqfj\" (UID: \"f9be93f4-5113-41ec-9604-4142d479155d\") " pod="openstack/neutron-db-create-4kqfj" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.091190 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25a4a3fa-f57f-41f5-9f10-664cf17f38c1-config-data\") pod \"keystone-db-sync-xf272\" (UID: \"25a4a3fa-f57f-41f5-9f10-664cf17f38c1\") " pod="openstack/keystone-db-sync-xf272" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.091224 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2v24\" (UniqueName: \"kubernetes.io/projected/25a4a3fa-f57f-41f5-9f10-664cf17f38c1-kube-api-access-k2v24\") pod \"keystone-db-sync-xf272\" (UID: \"25a4a3fa-f57f-41f5-9f10-664cf17f38c1\") " pod="openstack/keystone-db-sync-xf272" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.091291 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/961c6dcd-1a16-4c9c-90f4-1ec4325e3512-operator-scripts\") pod \"neutron-05ec-account-create-update-dw6xr\" (UID: \"961c6dcd-1a16-4c9c-90f4-1ec4325e3512\") " 
pod="openstack/neutron-05ec-account-create-update-dw6xr" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.091389 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9be93f4-5113-41ec-9604-4142d479155d-operator-scripts\") pod \"neutron-db-create-4kqfj\" (UID: \"f9be93f4-5113-41ec-9604-4142d479155d\") " pod="openstack/neutron-db-create-4kqfj" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.091413 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25a4a3fa-f57f-41f5-9f10-664cf17f38c1-combined-ca-bundle\") pod \"keystone-db-sync-xf272\" (UID: \"25a4a3fa-f57f-41f5-9f10-664cf17f38c1\") " pod="openstack/keystone-db-sync-xf272" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.091449 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4bzp\" (UniqueName: \"kubernetes.io/projected/961c6dcd-1a16-4c9c-90f4-1ec4325e3512-kube-api-access-s4bzp\") pod \"neutron-05ec-account-create-update-dw6xr\" (UID: \"961c6dcd-1a16-4c9c-90f4-1ec4325e3512\") " pod="openstack/neutron-05ec-account-create-update-dw6xr" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.096576 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25a4a3fa-f57f-41f5-9f10-664cf17f38c1-config-data\") pod \"keystone-db-sync-xf272\" (UID: \"25a4a3fa-f57f-41f5-9f10-664cf17f38c1\") " pod="openstack/keystone-db-sync-xf272" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.097827 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/961c6dcd-1a16-4c9c-90f4-1ec4325e3512-operator-scripts\") pod \"neutron-05ec-account-create-update-dw6xr\" (UID: \"961c6dcd-1a16-4c9c-90f4-1ec4325e3512\") " 
pod="openstack/neutron-05ec-account-create-update-dw6xr" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.107426 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25a4a3fa-f57f-41f5-9f10-664cf17f38c1-combined-ca-bundle\") pod \"keystone-db-sync-xf272\" (UID: \"25a4a3fa-f57f-41f5-9f10-664cf17f38c1\") " pod="openstack/keystone-db-sync-xf272" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.117626 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4bzp\" (UniqueName: \"kubernetes.io/projected/961c6dcd-1a16-4c9c-90f4-1ec4325e3512-kube-api-access-s4bzp\") pod \"neutron-05ec-account-create-update-dw6xr\" (UID: \"961c6dcd-1a16-4c9c-90f4-1ec4325e3512\") " pod="openstack/neutron-05ec-account-create-update-dw6xr" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.121416 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2v24\" (UniqueName: \"kubernetes.io/projected/25a4a3fa-f57f-41f5-9f10-664cf17f38c1-kube-api-access-k2v24\") pod \"keystone-db-sync-xf272\" (UID: \"25a4a3fa-f57f-41f5-9f10-664cf17f38c1\") " pod="openstack/keystone-db-sync-xf272" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.176443 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-xf272" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.193458 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4t87\" (UniqueName: \"kubernetes.io/projected/f9be93f4-5113-41ec-9604-4142d479155d-kube-api-access-v4t87\") pod \"neutron-db-create-4kqfj\" (UID: \"f9be93f4-5113-41ec-9604-4142d479155d\") " pod="openstack/neutron-db-create-4kqfj" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.193645 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9be93f4-5113-41ec-9604-4142d479155d-operator-scripts\") pod \"neutron-db-create-4kqfj\" (UID: \"f9be93f4-5113-41ec-9604-4142d479155d\") " pod="openstack/neutron-db-create-4kqfj" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.194653 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9be93f4-5113-41ec-9604-4142d479155d-operator-scripts\") pod \"neutron-db-create-4kqfj\" (UID: \"f9be93f4-5113-41ec-9604-4142d479155d\") " pod="openstack/neutron-db-create-4kqfj" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.211516 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4t87\" (UniqueName: \"kubernetes.io/projected/f9be93f4-5113-41ec-9604-4142d479155d-kube-api-access-v4t87\") pod \"neutron-db-create-4kqfj\" (UID: \"f9be93f4-5113-41ec-9604-4142d479155d\") " pod="openstack/neutron-db-create-4kqfj" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.309374 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-05ec-account-create-update-dw6xr" Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.364274 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8851-account-create-update-svz5f"] Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.368221 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4kqfj" Dec 15 05:51:52 crc kubenswrapper[4747]: W1215 05:51:52.383096 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33c060b9_802c_43ba_8020_2bf8afda93ce.slice/crio-cd7c0e15c15331296d2b1085bc5cec4485ef0c76afaa6cc42aa59b09f98b7c77 WatchSource:0}: Error finding container cd7c0e15c15331296d2b1085bc5cec4485ef0c76afaa6cc42aa59b09f98b7c77: Status 404 returned error can't find the container with id cd7c0e15c15331296d2b1085bc5cec4485ef0c76afaa6cc42aa59b09f98b7c77 Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.437892 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8851-account-create-update-svz5f" event={"ID":"33c060b9-802c-43ba-8020-2bf8afda93ce","Type":"ContainerStarted","Data":"cd7c0e15c15331296d2b1085bc5cec4485ef0c76afaa6cc42aa59b09f98b7c77"} Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.642606 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-05ec-account-create-update-dw6xr"] Dec 15 05:51:52 crc kubenswrapper[4747]: W1215 05:51:52.645895 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod961c6dcd_1a16_4c9c_90f4_1ec4325e3512.slice/crio-15fdbe57bce7fc7d7555f17e4d0176d964be12b2585bd69538912791db5d766e WatchSource:0}: Error finding container 15fdbe57bce7fc7d7555f17e4d0176d964be12b2585bd69538912791db5d766e: Status 404 returned error can't find the container with id 
15fdbe57bce7fc7d7555f17e4d0176d964be12b2585bd69538912791db5d766e Dec 15 05:51:52 crc kubenswrapper[4747]: W1215 05:51:52.646238 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24e3063e_4cb4_4695_9f2a_0a26592ec3cf.slice/crio-0527759a1caeff19d5eeeae9941cb1cd971d62b546f26d759238e744fc6ae93b WatchSource:0}: Error finding container 0527759a1caeff19d5eeeae9941cb1cd971d62b546f26d759238e744fc6ae93b: Status 404 returned error can't find the container with id 0527759a1caeff19d5eeeae9941cb1cd971d62b546f26d759238e744fc6ae93b Dec 15 05:51:52 crc kubenswrapper[4747]: W1215 05:51:52.648966 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4aa0c3cd_1263_4dc6_9d30_b58efa93e393.slice/crio-697f121f8a289642bcd2f83598e42d02b413c03e5d750a1f29ca15c4fcd03e86 WatchSource:0}: Error finding container 697f121f8a289642bcd2f83598e42d02b413c03e5d750a1f29ca15c4fcd03e86: Status 404 returned error can't find the container with id 697f121f8a289642bcd2f83598e42d02b413c03e5d750a1f29ca15c4fcd03e86 Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.650940 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mnk7n"] Dec 15 05:51:52 crc kubenswrapper[4747]: I1215 05:51:52.656776 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jpfz6"] Dec 15 05:51:53 crc kubenswrapper[4747]: I1215 05:51:52.750854 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-xf272"] Dec 15 05:51:53 crc kubenswrapper[4747]: I1215 05:51:52.758338 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-492b-account-create-update-cc8pp"] Dec 15 05:51:53 crc kubenswrapper[4747]: W1215 05:51:52.764081 4747 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32d62287_c73f_47e1_9e64_eb23eaf98dc0.slice/crio-be9f267aa914e09952e0b097ddcf5ad1f8dd4616a2bcd547ff2c411711aabf62 WatchSource:0}: Error finding container be9f267aa914e09952e0b097ddcf5ad1f8dd4616a2bcd547ff2c411711aabf62: Status 404 returned error can't find the container with id be9f267aa914e09952e0b097ddcf5ad1f8dd4616a2bcd547ff2c411711aabf62 Dec 15 05:51:53 crc kubenswrapper[4747]: W1215 05:51:52.785527 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25a4a3fa_f57f_41f5_9f10_664cf17f38c1.slice/crio-2443b03ed495f03962a0064efb4272fe81696132f7d4b4450cdc5d732b394891 WatchSource:0}: Error finding container 2443b03ed495f03962a0064efb4272fe81696132f7d4b4450cdc5d732b394891: Status 404 returned error can't find the container with id 2443b03ed495f03962a0064efb4272fe81696132f7d4b4450cdc5d732b394891 Dec 15 05:51:53 crc kubenswrapper[4747]: I1215 05:51:52.910953 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4kqfj"] Dec 15 05:51:53 crc kubenswrapper[4747]: W1215 05:51:52.946734 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9be93f4_5113_41ec_9604_4142d479155d.slice/crio-1318c58768bf2999906aa5cf470dfa9d43017d77b9be9a5aecd583a9ae87a318 WatchSource:0}: Error finding container 1318c58768bf2999906aa5cf470dfa9d43017d77b9be9a5aecd583a9ae87a318: Status 404 returned error can't find the container with id 1318c58768bf2999906aa5cf470dfa9d43017d77b9be9a5aecd583a9ae87a318 Dec 15 05:51:53 crc kubenswrapper[4747]: I1215 05:51:53.449873 4747 generic.go:334] "Generic (PLEG): container finished" podID="4aa0c3cd-1263-4dc6-9d30-b58efa93e393" containerID="c429cd9c2f6921834a087af44b405b1d6f24193ef67089d89e66d13ac1c52963" exitCode=0 Dec 15 05:51:53 crc kubenswrapper[4747]: I1215 05:51:53.450003 4747 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jpfz6" event={"ID":"4aa0c3cd-1263-4dc6-9d30-b58efa93e393","Type":"ContainerDied","Data":"c429cd9c2f6921834a087af44b405b1d6f24193ef67089d89e66d13ac1c52963"} Dec 15 05:51:53 crc kubenswrapper[4747]: I1215 05:51:53.450413 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jpfz6" event={"ID":"4aa0c3cd-1263-4dc6-9d30-b58efa93e393","Type":"ContainerStarted","Data":"697f121f8a289642bcd2f83598e42d02b413c03e5d750a1f29ca15c4fcd03e86"} Dec 15 05:51:53 crc kubenswrapper[4747]: I1215 05:51:53.452470 4747 generic.go:334] "Generic (PLEG): container finished" podID="24e3063e-4cb4-4695-9f2a-0a26592ec3cf" containerID="15db45f295818ff3aa48b7e9ae715dbfad735b614677ee1980e23350be0e36ec" exitCode=0 Dec 15 05:51:53 crc kubenswrapper[4747]: I1215 05:51:53.452533 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mnk7n" event={"ID":"24e3063e-4cb4-4695-9f2a-0a26592ec3cf","Type":"ContainerDied","Data":"15db45f295818ff3aa48b7e9ae715dbfad735b614677ee1980e23350be0e36ec"} Dec 15 05:51:53 crc kubenswrapper[4747]: I1215 05:51:53.452587 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mnk7n" event={"ID":"24e3063e-4cb4-4695-9f2a-0a26592ec3cf","Type":"ContainerStarted","Data":"0527759a1caeff19d5eeeae9941cb1cd971d62b546f26d759238e744fc6ae93b"} Dec 15 05:51:53 crc kubenswrapper[4747]: I1215 05:51:53.454279 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xf272" event={"ID":"25a4a3fa-f57f-41f5-9f10-664cf17f38c1","Type":"ContainerStarted","Data":"2443b03ed495f03962a0064efb4272fe81696132f7d4b4450cdc5d732b394891"} Dec 15 05:51:53 crc kubenswrapper[4747]: I1215 05:51:53.455977 4747 generic.go:334] "Generic (PLEG): container finished" podID="33c060b9-802c-43ba-8020-2bf8afda93ce" containerID="7ed0df32aaf83aafd3c89500c065e6f8affd2c5b6a04990de17351de3f6d45df" exitCode=0 Dec 15 05:51:53 crc 
kubenswrapper[4747]: I1215 05:51:53.456039 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8851-account-create-update-svz5f" event={"ID":"33c060b9-802c-43ba-8020-2bf8afda93ce","Type":"ContainerDied","Data":"7ed0df32aaf83aafd3c89500c065e6f8affd2c5b6a04990de17351de3f6d45df"} Dec 15 05:51:53 crc kubenswrapper[4747]: I1215 05:51:53.457442 4747 generic.go:334] "Generic (PLEG): container finished" podID="961c6dcd-1a16-4c9c-90f4-1ec4325e3512" containerID="b359c8a60c41a492332b71df95c563453b1a044da6a7d0481963b5f8a5f2a115" exitCode=0 Dec 15 05:51:53 crc kubenswrapper[4747]: I1215 05:51:53.457546 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-05ec-account-create-update-dw6xr" event={"ID":"961c6dcd-1a16-4c9c-90f4-1ec4325e3512","Type":"ContainerDied","Data":"b359c8a60c41a492332b71df95c563453b1a044da6a7d0481963b5f8a5f2a115"} Dec 15 05:51:53 crc kubenswrapper[4747]: I1215 05:51:53.457594 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-05ec-account-create-update-dw6xr" event={"ID":"961c6dcd-1a16-4c9c-90f4-1ec4325e3512","Type":"ContainerStarted","Data":"15fdbe57bce7fc7d7555f17e4d0176d964be12b2585bd69538912791db5d766e"} Dec 15 05:51:53 crc kubenswrapper[4747]: I1215 05:51:53.459711 4747 generic.go:334] "Generic (PLEG): container finished" podID="32d62287-c73f-47e1-9e64-eb23eaf98dc0" containerID="3396993c6639717501fa5ce0308de40abbb2cbc3875ca355a393167f207f2b83" exitCode=0 Dec 15 05:51:53 crc kubenswrapper[4747]: I1215 05:51:53.459803 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-492b-account-create-update-cc8pp" event={"ID":"32d62287-c73f-47e1-9e64-eb23eaf98dc0","Type":"ContainerDied","Data":"3396993c6639717501fa5ce0308de40abbb2cbc3875ca355a393167f207f2b83"} Dec 15 05:51:53 crc kubenswrapper[4747]: I1215 05:51:53.459851 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-492b-account-create-update-cc8pp" 
event={"ID":"32d62287-c73f-47e1-9e64-eb23eaf98dc0","Type":"ContainerStarted","Data":"be9f267aa914e09952e0b097ddcf5ad1f8dd4616a2bcd547ff2c411711aabf62"} Dec 15 05:51:53 crc kubenswrapper[4747]: I1215 05:51:53.461855 4747 generic.go:334] "Generic (PLEG): container finished" podID="f9be93f4-5113-41ec-9604-4142d479155d" containerID="2e51d143beb645843dc4fe3c8412f75c63e0096896b9a3b3a1b211572db41418" exitCode=0 Dec 15 05:51:53 crc kubenswrapper[4747]: I1215 05:51:53.461901 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4kqfj" event={"ID":"f9be93f4-5113-41ec-9604-4142d479155d","Type":"ContainerDied","Data":"2e51d143beb645843dc4fe3c8412f75c63e0096896b9a3b3a1b211572db41418"} Dec 15 05:51:53 crc kubenswrapper[4747]: I1215 05:51:53.461946 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4kqfj" event={"ID":"f9be93f4-5113-41ec-9604-4142d479155d","Type":"ContainerStarted","Data":"1318c58768bf2999906aa5cf470dfa9d43017d77b9be9a5aecd583a9ae87a318"} Dec 15 05:51:54 crc kubenswrapper[4747]: I1215 05:51:54.478242 4747 generic.go:334] "Generic (PLEG): container finished" podID="fd62cda8-66b6-4fe4-976d-0723d296a262" containerID="250206dd7cd405d970c21eb1cabe2a6085f3f2aa44cdcaf50f8b6912936d924b" exitCode=0 Dec 15 05:51:54 crc kubenswrapper[4747]: I1215 05:51:54.478325 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bw8hg" event={"ID":"fd62cda8-66b6-4fe4-976d-0723d296a262","Type":"ContainerDied","Data":"250206dd7cd405d970c21eb1cabe2a6085f3f2aa44cdcaf50f8b6912936d924b"} Dec 15 05:51:54 crc kubenswrapper[4747]: I1215 05:51:54.848447 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-jpfz6" Dec 15 05:51:54 crc kubenswrapper[4747]: I1215 05:51:54.953993 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa0c3cd-1263-4dc6-9d30-b58efa93e393-operator-scripts\") pod \"4aa0c3cd-1263-4dc6-9d30-b58efa93e393\" (UID: \"4aa0c3cd-1263-4dc6-9d30-b58efa93e393\") " Dec 15 05:51:54 crc kubenswrapper[4747]: I1215 05:51:54.954343 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd4fc\" (UniqueName: \"kubernetes.io/projected/4aa0c3cd-1263-4dc6-9d30-b58efa93e393-kube-api-access-kd4fc\") pod \"4aa0c3cd-1263-4dc6-9d30-b58efa93e393\" (UID: \"4aa0c3cd-1263-4dc6-9d30-b58efa93e393\") " Dec 15 05:51:54 crc kubenswrapper[4747]: I1215 05:51:54.954669 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aa0c3cd-1263-4dc6-9d30-b58efa93e393-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4aa0c3cd-1263-4dc6-9d30-b58efa93e393" (UID: "4aa0c3cd-1263-4dc6-9d30-b58efa93e393"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:54 crc kubenswrapper[4747]: I1215 05:51:54.955229 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa0c3cd-1263-4dc6-9d30-b58efa93e393-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:54 crc kubenswrapper[4747]: I1215 05:51:54.969063 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aa0c3cd-1263-4dc6-9d30-b58efa93e393-kube-api-access-kd4fc" (OuterVolumeSpecName: "kube-api-access-kd4fc") pod "4aa0c3cd-1263-4dc6-9d30-b58efa93e393" (UID: "4aa0c3cd-1263-4dc6-9d30-b58efa93e393"). InnerVolumeSpecName "kube-api-access-kd4fc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.024856 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-492b-account-create-update-cc8pp" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.028705 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-05ec-account-create-update-dw6xr" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.043674 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mnk7n" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.058614 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd4fc\" (UniqueName: \"kubernetes.io/projected/4aa0c3cd-1263-4dc6-9d30-b58efa93e393-kube-api-access-kd4fc\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.063469 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8851-account-create-update-svz5f" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.066471 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-4kqfj" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.159653 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4t87\" (UniqueName: \"kubernetes.io/projected/f9be93f4-5113-41ec-9604-4142d479155d-kube-api-access-v4t87\") pod \"f9be93f4-5113-41ec-9604-4142d479155d\" (UID: \"f9be93f4-5113-41ec-9604-4142d479155d\") " Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.159696 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/961c6dcd-1a16-4c9c-90f4-1ec4325e3512-operator-scripts\") pod \"961c6dcd-1a16-4c9c-90f4-1ec4325e3512\" (UID: \"961c6dcd-1a16-4c9c-90f4-1ec4325e3512\") " Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.160454 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/961c6dcd-1a16-4c9c-90f4-1ec4325e3512-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "961c6dcd-1a16-4c9c-90f4-1ec4325e3512" (UID: "961c6dcd-1a16-4c9c-90f4-1ec4325e3512"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.160533 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml7s8\" (UniqueName: \"kubernetes.io/projected/32d62287-c73f-47e1-9e64-eb23eaf98dc0-kube-api-access-ml7s8\") pod \"32d62287-c73f-47e1-9e64-eb23eaf98dc0\" (UID: \"32d62287-c73f-47e1-9e64-eb23eaf98dc0\") " Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.160604 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbxbw\" (UniqueName: \"kubernetes.io/projected/24e3063e-4cb4-4695-9f2a-0a26592ec3cf-kube-api-access-gbxbw\") pod \"24e3063e-4cb4-4695-9f2a-0a26592ec3cf\" (UID: \"24e3063e-4cb4-4695-9f2a-0a26592ec3cf\") " Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.160677 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9be93f4-5113-41ec-9604-4142d479155d-operator-scripts\") pod \"f9be93f4-5113-41ec-9604-4142d479155d\" (UID: \"f9be93f4-5113-41ec-9604-4142d479155d\") " Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.160709 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e3063e-4cb4-4695-9f2a-0a26592ec3cf-operator-scripts\") pod \"24e3063e-4cb4-4695-9f2a-0a26592ec3cf\" (UID: \"24e3063e-4cb4-4695-9f2a-0a26592ec3cf\") " Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.160764 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4bzp\" (UniqueName: \"kubernetes.io/projected/961c6dcd-1a16-4c9c-90f4-1ec4325e3512-kube-api-access-s4bzp\") pod \"961c6dcd-1a16-4c9c-90f4-1ec4325e3512\" (UID: \"961c6dcd-1a16-4c9c-90f4-1ec4325e3512\") " Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.160815 4747 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-692mz\" (UniqueName: \"kubernetes.io/projected/33c060b9-802c-43ba-8020-2bf8afda93ce-kube-api-access-692mz\") pod \"33c060b9-802c-43ba-8020-2bf8afda93ce\" (UID: \"33c060b9-802c-43ba-8020-2bf8afda93ce\") " Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.160844 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33c060b9-802c-43ba-8020-2bf8afda93ce-operator-scripts\") pod \"33c060b9-802c-43ba-8020-2bf8afda93ce\" (UID: \"33c060b9-802c-43ba-8020-2bf8afda93ce\") " Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.160865 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32d62287-c73f-47e1-9e64-eb23eaf98dc0-operator-scripts\") pod \"32d62287-c73f-47e1-9e64-eb23eaf98dc0\" (UID: \"32d62287-c73f-47e1-9e64-eb23eaf98dc0\") " Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.161982 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33c060b9-802c-43ba-8020-2bf8afda93ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33c060b9-802c-43ba-8020-2bf8afda93ce" (UID: "33c060b9-802c-43ba-8020-2bf8afda93ce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.162068 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24e3063e-4cb4-4695-9f2a-0a26592ec3cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24e3063e-4cb4-4695-9f2a-0a26592ec3cf" (UID: "24e3063e-4cb4-4695-9f2a-0a26592ec3cf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.162096 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d62287-c73f-47e1-9e64-eb23eaf98dc0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32d62287-c73f-47e1-9e64-eb23eaf98dc0" (UID: "32d62287-c73f-47e1-9e64-eb23eaf98dc0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.162217 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9be93f4-5113-41ec-9604-4142d479155d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9be93f4-5113-41ec-9604-4142d479155d" (UID: "f9be93f4-5113-41ec-9604-4142d479155d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.162340 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9be93f4-5113-41ec-9604-4142d479155d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.162364 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e3063e-4cb4-4695-9f2a-0a26592ec3cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.162398 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33c060b9-802c-43ba-8020-2bf8afda93ce-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.162408 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/32d62287-c73f-47e1-9e64-eb23eaf98dc0-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.162418 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/961c6dcd-1a16-4c9c-90f4-1ec4325e3512-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.164514 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9be93f4-5113-41ec-9604-4142d479155d-kube-api-access-v4t87" (OuterVolumeSpecName: "kube-api-access-v4t87") pod "f9be93f4-5113-41ec-9604-4142d479155d" (UID: "f9be93f4-5113-41ec-9604-4142d479155d"). InnerVolumeSpecName "kube-api-access-v4t87". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.165339 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32d62287-c73f-47e1-9e64-eb23eaf98dc0-kube-api-access-ml7s8" (OuterVolumeSpecName: "kube-api-access-ml7s8") pod "32d62287-c73f-47e1-9e64-eb23eaf98dc0" (UID: "32d62287-c73f-47e1-9e64-eb23eaf98dc0"). InnerVolumeSpecName "kube-api-access-ml7s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.165375 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24e3063e-4cb4-4695-9f2a-0a26592ec3cf-kube-api-access-gbxbw" (OuterVolumeSpecName: "kube-api-access-gbxbw") pod "24e3063e-4cb4-4695-9f2a-0a26592ec3cf" (UID: "24e3063e-4cb4-4695-9f2a-0a26592ec3cf"). InnerVolumeSpecName "kube-api-access-gbxbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.165783 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33c060b9-802c-43ba-8020-2bf8afda93ce-kube-api-access-692mz" (OuterVolumeSpecName: "kube-api-access-692mz") pod "33c060b9-802c-43ba-8020-2bf8afda93ce" (UID: "33c060b9-802c-43ba-8020-2bf8afda93ce"). InnerVolumeSpecName "kube-api-access-692mz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.166488 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/961c6dcd-1a16-4c9c-90f4-1ec4325e3512-kube-api-access-s4bzp" (OuterVolumeSpecName: "kube-api-access-s4bzp") pod "961c6dcd-1a16-4c9c-90f4-1ec4325e3512" (UID: "961c6dcd-1a16-4c9c-90f4-1ec4325e3512"). InnerVolumeSpecName "kube-api-access-s4bzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.264112 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4t87\" (UniqueName: \"kubernetes.io/projected/f9be93f4-5113-41ec-9604-4142d479155d-kube-api-access-v4t87\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.264148 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml7s8\" (UniqueName: \"kubernetes.io/projected/32d62287-c73f-47e1-9e64-eb23eaf98dc0-kube-api-access-ml7s8\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.264167 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbxbw\" (UniqueName: \"kubernetes.io/projected/24e3063e-4cb4-4695-9f2a-0a26592ec3cf-kube-api-access-gbxbw\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.264179 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4bzp\" (UniqueName: 
\"kubernetes.io/projected/961c6dcd-1a16-4c9c-90f4-1ec4325e3512-kube-api-access-s4bzp\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.264191 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-692mz\" (UniqueName: \"kubernetes.io/projected/33c060b9-802c-43ba-8020-2bf8afda93ce-kube-api-access-692mz\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.493992 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8851-account-create-update-svz5f" event={"ID":"33c060b9-802c-43ba-8020-2bf8afda93ce","Type":"ContainerDied","Data":"cd7c0e15c15331296d2b1085bc5cec4485ef0c76afaa6cc42aa59b09f98b7c77"} Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.494030 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8851-account-create-update-svz5f" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.494058 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd7c0e15c15331296d2b1085bc5cec4485ef0c76afaa6cc42aa59b09f98b7c77" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.496227 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-05ec-account-create-update-dw6xr" event={"ID":"961c6dcd-1a16-4c9c-90f4-1ec4325e3512","Type":"ContainerDied","Data":"15fdbe57bce7fc7d7555f17e4d0176d964be12b2585bd69538912791db5d766e"} Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.496274 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15fdbe57bce7fc7d7555f17e4d0176d964be12b2585bd69538912791db5d766e" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.496356 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-05ec-account-create-update-dw6xr" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.498834 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-492b-account-create-update-cc8pp" event={"ID":"32d62287-c73f-47e1-9e64-eb23eaf98dc0","Type":"ContainerDied","Data":"be9f267aa914e09952e0b097ddcf5ad1f8dd4616a2bcd547ff2c411711aabf62"} Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.498891 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be9f267aa914e09952e0b097ddcf5ad1f8dd4616a2bcd547ff2c411711aabf62" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.498995 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-492b-account-create-update-cc8pp" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.510600 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4kqfj" event={"ID":"f9be93f4-5113-41ec-9604-4142d479155d","Type":"ContainerDied","Data":"1318c58768bf2999906aa5cf470dfa9d43017d77b9be9a5aecd583a9ae87a318"} Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.510663 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1318c58768bf2999906aa5cf470dfa9d43017d77b9be9a5aecd583a9ae87a318" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.510756 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-4kqfj" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.524210 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jpfz6" event={"ID":"4aa0c3cd-1263-4dc6-9d30-b58efa93e393","Type":"ContainerDied","Data":"697f121f8a289642bcd2f83598e42d02b413c03e5d750a1f29ca15c4fcd03e86"} Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.524254 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="697f121f8a289642bcd2f83598e42d02b413c03e5d750a1f29ca15c4fcd03e86" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.524307 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jpfz6" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.526665 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mnk7n" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.527113 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mnk7n" event={"ID":"24e3063e-4cb4-4695-9f2a-0a26592ec3cf","Type":"ContainerDied","Data":"0527759a1caeff19d5eeeae9941cb1cd971d62b546f26d759238e744fc6ae93b"} Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.527182 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0527759a1caeff19d5eeeae9941cb1cd971d62b546f26d759238e744fc6ae93b" Dec 15 05:51:55 crc kubenswrapper[4747]: I1215 05:51:55.952547 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bw8hg" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.087709 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd62cda8-66b6-4fe4-976d-0723d296a262-db-sync-config-data\") pod \"fd62cda8-66b6-4fe4-976d-0723d296a262\" (UID: \"fd62cda8-66b6-4fe4-976d-0723d296a262\") " Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.087821 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd62cda8-66b6-4fe4-976d-0723d296a262-combined-ca-bundle\") pod \"fd62cda8-66b6-4fe4-976d-0723d296a262\" (UID: \"fd62cda8-66b6-4fe4-976d-0723d296a262\") " Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.088444 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd62cda8-66b6-4fe4-976d-0723d296a262-config-data\") pod \"fd62cda8-66b6-4fe4-976d-0723d296a262\" (UID: \"fd62cda8-66b6-4fe4-976d-0723d296a262\") " Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.088599 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svj4c\" (UniqueName: \"kubernetes.io/projected/fd62cda8-66b6-4fe4-976d-0723d296a262-kube-api-access-svj4c\") pod \"fd62cda8-66b6-4fe4-976d-0723d296a262\" (UID: \"fd62cda8-66b6-4fe4-976d-0723d296a262\") " Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.092518 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd62cda8-66b6-4fe4-976d-0723d296a262-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fd62cda8-66b6-4fe4-976d-0723d296a262" (UID: "fd62cda8-66b6-4fe4-976d-0723d296a262"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.094380 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd62cda8-66b6-4fe4-976d-0723d296a262-kube-api-access-svj4c" (OuterVolumeSpecName: "kube-api-access-svj4c") pod "fd62cda8-66b6-4fe4-976d-0723d296a262" (UID: "fd62cda8-66b6-4fe4-976d-0723d296a262"). InnerVolumeSpecName "kube-api-access-svj4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.116105 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd62cda8-66b6-4fe4-976d-0723d296a262-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd62cda8-66b6-4fe4-976d-0723d296a262" (UID: "fd62cda8-66b6-4fe4-976d-0723d296a262"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.122206 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd62cda8-66b6-4fe4-976d-0723d296a262-config-data" (OuterVolumeSpecName: "config-data") pod "fd62cda8-66b6-4fe4-976d-0723d296a262" (UID: "fd62cda8-66b6-4fe4-976d-0723d296a262"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.191438 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svj4c\" (UniqueName: \"kubernetes.io/projected/fd62cda8-66b6-4fe4-976d-0723d296a262-kube-api-access-svj4c\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.191475 4747 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fd62cda8-66b6-4fe4-976d-0723d296a262-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.191487 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd62cda8-66b6-4fe4-976d-0723d296a262-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.191499 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd62cda8-66b6-4fe4-976d-0723d296a262-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.538437 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bw8hg" event={"ID":"fd62cda8-66b6-4fe4-976d-0723d296a262","Type":"ContainerDied","Data":"435840b0c20f593a3128e526e713de56a7b47d8900f13325f92a05373f46c51a"} Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.538778 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="435840b0c20f593a3128e526e713de56a7b47d8900f13325f92a05373f46c51a" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.538481 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bw8hg" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.862395 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-667974f6c9-qrzph"] Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.863012 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-667974f6c9-qrzph" podUID="983e9d03-796d-4072-adad-cfe1797ae364" containerName="dnsmasq-dns" containerID="cri-o://f01c886861b4c40ddd45d01696eee29a56f55472df45c6105920abbc84dbb844" gracePeriod=10 Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.867292 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.917758 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6db75979dc-kxl2c"] Dec 15 05:51:56 crc kubenswrapper[4747]: E1215 05:51:56.918162 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d62287-c73f-47e1-9e64-eb23eaf98dc0" containerName="mariadb-account-create-update" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.918176 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d62287-c73f-47e1-9e64-eb23eaf98dc0" containerName="mariadb-account-create-update" Dec 15 05:51:56 crc kubenswrapper[4747]: E1215 05:51:56.918185 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd62cda8-66b6-4fe4-976d-0723d296a262" containerName="glance-db-sync" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.918191 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd62cda8-66b6-4fe4-976d-0723d296a262" containerName="glance-db-sync" Dec 15 05:51:56 crc kubenswrapper[4747]: E1215 05:51:56.918202 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aa0c3cd-1263-4dc6-9d30-b58efa93e393" containerName="mariadb-database-create" Dec 15 05:51:56 crc 
kubenswrapper[4747]: I1215 05:51:56.918207 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aa0c3cd-1263-4dc6-9d30-b58efa93e393" containerName="mariadb-database-create" Dec 15 05:51:56 crc kubenswrapper[4747]: E1215 05:51:56.918221 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9be93f4-5113-41ec-9604-4142d479155d" containerName="mariadb-database-create" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.918227 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9be93f4-5113-41ec-9604-4142d479155d" containerName="mariadb-database-create" Dec 15 05:51:56 crc kubenswrapper[4747]: E1215 05:51:56.918244 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e3063e-4cb4-4695-9f2a-0a26592ec3cf" containerName="mariadb-database-create" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.918249 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e3063e-4cb4-4695-9f2a-0a26592ec3cf" containerName="mariadb-database-create" Dec 15 05:51:56 crc kubenswrapper[4747]: E1215 05:51:56.918257 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="961c6dcd-1a16-4c9c-90f4-1ec4325e3512" containerName="mariadb-account-create-update" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.918262 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="961c6dcd-1a16-4c9c-90f4-1ec4325e3512" containerName="mariadb-account-create-update" Dec 15 05:51:56 crc kubenswrapper[4747]: E1215 05:51:56.918277 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33c060b9-802c-43ba-8020-2bf8afda93ce" containerName="mariadb-account-create-update" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.918283 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="33c060b9-802c-43ba-8020-2bf8afda93ce" containerName="mariadb-account-create-update" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.918428 4747 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fd62cda8-66b6-4fe4-976d-0723d296a262" containerName="glance-db-sync" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.918447 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aa0c3cd-1263-4dc6-9d30-b58efa93e393" containerName="mariadb-database-create" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.918457 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e3063e-4cb4-4695-9f2a-0a26592ec3cf" containerName="mariadb-database-create" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.918464 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="32d62287-c73f-47e1-9e64-eb23eaf98dc0" containerName="mariadb-account-create-update" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.918470 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9be93f4-5113-41ec-9604-4142d479155d" containerName="mariadb-database-create" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.918477 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="33c060b9-802c-43ba-8020-2bf8afda93ce" containerName="mariadb-account-create-update" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.918484 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="961c6dcd-1a16-4c9c-90f4-1ec4325e3512" containerName="mariadb-account-create-update" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.920597 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:51:56 crc kubenswrapper[4747]: I1215 05:51:56.963068 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6db75979dc-kxl2c"] Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.008941 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-dns-swift-storage-0\") pod \"dnsmasq-dns-6db75979dc-kxl2c\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.008999 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-dns-svc\") pod \"dnsmasq-dns-6db75979dc-kxl2c\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.009025 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-ovsdbserver-nb\") pod \"dnsmasq-dns-6db75979dc-kxl2c\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.009100 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsfbt\" (UniqueName: \"kubernetes.io/projected/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-kube-api-access-bsfbt\") pod \"dnsmasq-dns-6db75979dc-kxl2c\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.009193 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-config\") pod \"dnsmasq-dns-6db75979dc-kxl2c\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.009224 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-ovsdbserver-sb\") pod \"dnsmasq-dns-6db75979dc-kxl2c\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.110775 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-dns-swift-storage-0\") pod \"dnsmasq-dns-6db75979dc-kxl2c\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.110816 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-dns-svc\") pod \"dnsmasq-dns-6db75979dc-kxl2c\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.110833 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-ovsdbserver-nb\") pod \"dnsmasq-dns-6db75979dc-kxl2c\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.110871 4747 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-bsfbt\" (UniqueName: \"kubernetes.io/projected/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-kube-api-access-bsfbt\") pod \"dnsmasq-dns-6db75979dc-kxl2c\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.110908 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-config\") pod \"dnsmasq-dns-6db75979dc-kxl2c\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.110943 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-ovsdbserver-sb\") pod \"dnsmasq-dns-6db75979dc-kxl2c\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.111640 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-dns-swift-storage-0\") pod \"dnsmasq-dns-6db75979dc-kxl2c\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.111704 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-ovsdbserver-sb\") pod \"dnsmasq-dns-6db75979dc-kxl2c\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.111863 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-dns-svc\") pod \"dnsmasq-dns-6db75979dc-kxl2c\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.112670 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-config\") pod \"dnsmasq-dns-6db75979dc-kxl2c\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.114172 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-ovsdbserver-nb\") pod \"dnsmasq-dns-6db75979dc-kxl2c\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.127599 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsfbt\" (UniqueName: \"kubernetes.io/projected/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-kube-api-access-bsfbt\") pod \"dnsmasq-dns-6db75979dc-kxl2c\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.300801 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.562432 4747 generic.go:334] "Generic (PLEG): container finished" podID="983e9d03-796d-4072-adad-cfe1797ae364" containerID="f01c886861b4c40ddd45d01696eee29a56f55472df45c6105920abbc84dbb844" exitCode=0 Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.562493 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667974f6c9-qrzph" event={"ID":"983e9d03-796d-4072-adad-cfe1797ae364","Type":"ContainerDied","Data":"f01c886861b4c40ddd45d01696eee29a56f55472df45c6105920abbc84dbb844"} Dec 15 05:51:57 crc kubenswrapper[4747]: I1215 05:51:57.936737 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-667974f6c9-qrzph" podUID="983e9d03-796d-4072-adad-cfe1797ae364" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.120:5353: connect: connection refused" Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.459223 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.569838 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-config\") pod \"983e9d03-796d-4072-adad-cfe1797ae364\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.570052 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-ovsdbserver-sb\") pod \"983e9d03-796d-4072-adad-cfe1797ae364\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.570077 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-ovsdbserver-nb\") pod \"983e9d03-796d-4072-adad-cfe1797ae364\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.570147 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59mgf\" (UniqueName: \"kubernetes.io/projected/983e9d03-796d-4072-adad-cfe1797ae364-kube-api-access-59mgf\") pod \"983e9d03-796d-4072-adad-cfe1797ae364\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.570247 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-dns-swift-storage-0\") pod \"983e9d03-796d-4072-adad-cfe1797ae364\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.570285 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-dns-svc\") pod \"983e9d03-796d-4072-adad-cfe1797ae364\" (UID: \"983e9d03-796d-4072-adad-cfe1797ae364\") " Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.575649 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983e9d03-796d-4072-adad-cfe1797ae364-kube-api-access-59mgf" (OuterVolumeSpecName: "kube-api-access-59mgf") pod "983e9d03-796d-4072-adad-cfe1797ae364" (UID: "983e9d03-796d-4072-adad-cfe1797ae364"). InnerVolumeSpecName "kube-api-access-59mgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.593271 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667974f6c9-qrzph" event={"ID":"983e9d03-796d-4072-adad-cfe1797ae364","Type":"ContainerDied","Data":"a54c0b33f1c6be07f9a726c1753efe7bc0931653cb0e4e858a22f0541c194635"} Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.593309 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-667974f6c9-qrzph" Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.593356 4747 scope.go:117] "RemoveContainer" containerID="f01c886861b4c40ddd45d01696eee29a56f55472df45c6105920abbc84dbb844" Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.598632 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xf272" event={"ID":"25a4a3fa-f57f-41f5-9f10-664cf17f38c1","Type":"ContainerStarted","Data":"b98f828ff6c111644fdcdcd2cf12c43c91b6c25362947028543d792b3d955dee"} Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.607536 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "983e9d03-796d-4072-adad-cfe1797ae364" (UID: "983e9d03-796d-4072-adad-cfe1797ae364"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.617590 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-config" (OuterVolumeSpecName: "config") pod "983e9d03-796d-4072-adad-cfe1797ae364" (UID: "983e9d03-796d-4072-adad-cfe1797ae364"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.619510 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "983e9d03-796d-4072-adad-cfe1797ae364" (UID: "983e9d03-796d-4072-adad-cfe1797ae364"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.621118 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-xf272" podStartSLOduration=2.10456695 podStartE2EDuration="8.621080324s" podCreationTimestamp="2025-12-15 05:51:51 +0000 UTC" firstStartedPulling="2025-12-15 05:51:52.788854241 +0000 UTC m=+876.485366158" lastFinishedPulling="2025-12-15 05:51:59.305367615 +0000 UTC m=+883.001879532" observedRunningTime="2025-12-15 05:51:59.615548761 +0000 UTC m=+883.312060678" watchObservedRunningTime="2025-12-15 05:51:59.621080324 +0000 UTC m=+883.317592240" Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.622673 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "983e9d03-796d-4072-adad-cfe1797ae364" (UID: "983e9d03-796d-4072-adad-cfe1797ae364"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.625361 4747 scope.go:117] "RemoveContainer" containerID="27d48a4471e86b4b47005d8d6711b0505780863300e2028f5966ec7f211e5d6d" Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.627495 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "983e9d03-796d-4072-adad-cfe1797ae364" (UID: "983e9d03-796d-4072-adad-cfe1797ae364"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.673169 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.673555 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.673572 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.673584 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59mgf\" (UniqueName: \"kubernetes.io/projected/983e9d03-796d-4072-adad-cfe1797ae364-kube-api-access-59mgf\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.673595 4747 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-dns-swift-storage-0\") on node \"crc\" DevicePath 
\"\"" Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.673605 4747 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/983e9d03-796d-4072-adad-cfe1797ae364-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.699570 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6db75979dc-kxl2c"] Dec 15 05:51:59 crc kubenswrapper[4747]: W1215 05:51:59.701789 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ad61e1e_65fd_40b7_be7f_3e2dc679aee8.slice/crio-03c75141a90a51886bc53aad5e3222c9f6ef08073feacc5a02f4f972a6cfce5e WatchSource:0}: Error finding container 03c75141a90a51886bc53aad5e3222c9f6ef08073feacc5a02f4f972a6cfce5e: Status 404 returned error can't find the container with id 03c75141a90a51886bc53aad5e3222c9f6ef08073feacc5a02f4f972a6cfce5e Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.967788 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-667974f6c9-qrzph"] Dec 15 05:51:59 crc kubenswrapper[4747]: I1215 05:51:59.972260 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-667974f6c9-qrzph"] Dec 15 05:52:00 crc kubenswrapper[4747]: I1215 05:52:00.609116 4747 generic.go:334] "Generic (PLEG): container finished" podID="8ad61e1e-65fd-40b7-be7f-3e2dc679aee8" containerID="bce32a47ded452c6219b2f943f253bdc3274d4289b7867fa9975e28997f7b3d9" exitCode=0 Dec 15 05:52:00 crc kubenswrapper[4747]: I1215 05:52:00.609316 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" event={"ID":"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8","Type":"ContainerDied","Data":"bce32a47ded452c6219b2f943f253bdc3274d4289b7867fa9975e28997f7b3d9"} Dec 15 05:52:00 crc kubenswrapper[4747]: I1215 05:52:00.609450 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" event={"ID":"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8","Type":"ContainerStarted","Data":"03c75141a90a51886bc53aad5e3222c9f6ef08073feacc5a02f4f972a6cfce5e"} Dec 15 05:52:00 crc kubenswrapper[4747]: I1215 05:52:00.650781 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="983e9d03-796d-4072-adad-cfe1797ae364" path="/var/lib/kubelet/pods/983e9d03-796d-4072-adad-cfe1797ae364/volumes" Dec 15 05:52:01 crc kubenswrapper[4747]: I1215 05:52:01.174935 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qs2mw"] Dec 15 05:52:01 crc kubenswrapper[4747]: E1215 05:52:01.175520 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983e9d03-796d-4072-adad-cfe1797ae364" containerName="dnsmasq-dns" Dec 15 05:52:01 crc kubenswrapper[4747]: I1215 05:52:01.175536 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="983e9d03-796d-4072-adad-cfe1797ae364" containerName="dnsmasq-dns" Dec 15 05:52:01 crc kubenswrapper[4747]: E1215 05:52:01.175570 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983e9d03-796d-4072-adad-cfe1797ae364" containerName="init" Dec 15 05:52:01 crc kubenswrapper[4747]: I1215 05:52:01.175578 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="983e9d03-796d-4072-adad-cfe1797ae364" containerName="init" Dec 15 05:52:01 crc kubenswrapper[4747]: I1215 05:52:01.175720 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="983e9d03-796d-4072-adad-cfe1797ae364" containerName="dnsmasq-dns" Dec 15 05:52:01 crc kubenswrapper[4747]: I1215 05:52:01.176823 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qs2mw" Dec 15 05:52:01 crc kubenswrapper[4747]: I1215 05:52:01.186559 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qs2mw"] Dec 15 05:52:01 crc kubenswrapper[4747]: I1215 05:52:01.200700 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c603fa2b-48da-497f-82c0-9929a9e155a6-utilities\") pod \"redhat-marketplace-qs2mw\" (UID: \"c603fa2b-48da-497f-82c0-9929a9e155a6\") " pod="openshift-marketplace/redhat-marketplace-qs2mw" Dec 15 05:52:01 crc kubenswrapper[4747]: I1215 05:52:01.200803 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c603fa2b-48da-497f-82c0-9929a9e155a6-catalog-content\") pod \"redhat-marketplace-qs2mw\" (UID: \"c603fa2b-48da-497f-82c0-9929a9e155a6\") " pod="openshift-marketplace/redhat-marketplace-qs2mw" Dec 15 05:52:01 crc kubenswrapper[4747]: I1215 05:52:01.200892 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4s6n\" (UniqueName: \"kubernetes.io/projected/c603fa2b-48da-497f-82c0-9929a9e155a6-kube-api-access-r4s6n\") pod \"redhat-marketplace-qs2mw\" (UID: \"c603fa2b-48da-497f-82c0-9929a9e155a6\") " pod="openshift-marketplace/redhat-marketplace-qs2mw" Dec 15 05:52:01 crc kubenswrapper[4747]: I1215 05:52:01.302352 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4s6n\" (UniqueName: \"kubernetes.io/projected/c603fa2b-48da-497f-82c0-9929a9e155a6-kube-api-access-r4s6n\") pod \"redhat-marketplace-qs2mw\" (UID: \"c603fa2b-48da-497f-82c0-9929a9e155a6\") " pod="openshift-marketplace/redhat-marketplace-qs2mw" Dec 15 05:52:01 crc kubenswrapper[4747]: I1215 05:52:01.302586 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c603fa2b-48da-497f-82c0-9929a9e155a6-utilities\") pod \"redhat-marketplace-qs2mw\" (UID: \"c603fa2b-48da-497f-82c0-9929a9e155a6\") " pod="openshift-marketplace/redhat-marketplace-qs2mw" Dec 15 05:52:01 crc kubenswrapper[4747]: I1215 05:52:01.302611 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c603fa2b-48da-497f-82c0-9929a9e155a6-catalog-content\") pod \"redhat-marketplace-qs2mw\" (UID: \"c603fa2b-48da-497f-82c0-9929a9e155a6\") " pod="openshift-marketplace/redhat-marketplace-qs2mw" Dec 15 05:52:01 crc kubenswrapper[4747]: I1215 05:52:01.303050 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c603fa2b-48da-497f-82c0-9929a9e155a6-catalog-content\") pod \"redhat-marketplace-qs2mw\" (UID: \"c603fa2b-48da-497f-82c0-9929a9e155a6\") " pod="openshift-marketplace/redhat-marketplace-qs2mw" Dec 15 05:52:01 crc kubenswrapper[4747]: I1215 05:52:01.303235 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c603fa2b-48da-497f-82c0-9929a9e155a6-utilities\") pod \"redhat-marketplace-qs2mw\" (UID: \"c603fa2b-48da-497f-82c0-9929a9e155a6\") " pod="openshift-marketplace/redhat-marketplace-qs2mw" Dec 15 05:52:01 crc kubenswrapper[4747]: I1215 05:52:01.317800 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4s6n\" (UniqueName: \"kubernetes.io/projected/c603fa2b-48da-497f-82c0-9929a9e155a6-kube-api-access-r4s6n\") pod \"redhat-marketplace-qs2mw\" (UID: \"c603fa2b-48da-497f-82c0-9929a9e155a6\") " pod="openshift-marketplace/redhat-marketplace-qs2mw" Dec 15 05:52:01 crc kubenswrapper[4747]: I1215 05:52:01.491824 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qs2mw" Dec 15 05:52:01 crc kubenswrapper[4747]: I1215 05:52:01.617896 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" event={"ID":"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8","Type":"ContainerStarted","Data":"254e5a55b075192d212d437c5a8c992cfe8b47bcddc1542d6080110031c4c6d2"} Dec 15 05:52:01 crc kubenswrapper[4747]: I1215 05:52:01.618176 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:52:01 crc kubenswrapper[4747]: I1215 05:52:01.641499 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" podStartSLOduration=5.641480754 podStartE2EDuration="5.641480754s" podCreationTimestamp="2025-12-15 05:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:52:01.636701236 +0000 UTC m=+885.333213152" watchObservedRunningTime="2025-12-15 05:52:01.641480754 +0000 UTC m=+885.337992671" Dec 15 05:52:01 crc kubenswrapper[4747]: I1215 05:52:01.911754 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qs2mw"] Dec 15 05:52:01 crc kubenswrapper[4747]: W1215 05:52:01.914614 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc603fa2b_48da_497f_82c0_9929a9e155a6.slice/crio-54d1e259eb4a473fb06b074f3bb757d027c8eab94c7fd280cd23d4a61ec2a8aa WatchSource:0}: Error finding container 54d1e259eb4a473fb06b074f3bb757d027c8eab94c7fd280cd23d4a61ec2a8aa: Status 404 returned error can't find the container with id 54d1e259eb4a473fb06b074f3bb757d027c8eab94c7fd280cd23d4a61ec2a8aa Dec 15 05:52:02 crc kubenswrapper[4747]: I1215 05:52:02.630784 4747 generic.go:334] "Generic (PLEG): container finished" 
podID="25a4a3fa-f57f-41f5-9f10-664cf17f38c1" containerID="b98f828ff6c111644fdcdcd2cf12c43c91b6c25362947028543d792b3d955dee" exitCode=0 Dec 15 05:52:02 crc kubenswrapper[4747]: I1215 05:52:02.633166 4747 generic.go:334] "Generic (PLEG): container finished" podID="c603fa2b-48da-497f-82c0-9929a9e155a6" containerID="2389af3df909f6df9adbcfd83fcf3e2d233def84c20f022f146bcc8143e3f915" exitCode=0 Dec 15 05:52:02 crc kubenswrapper[4747]: I1215 05:52:02.639464 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xf272" event={"ID":"25a4a3fa-f57f-41f5-9f10-664cf17f38c1","Type":"ContainerDied","Data":"b98f828ff6c111644fdcdcd2cf12c43c91b6c25362947028543d792b3d955dee"} Dec 15 05:52:02 crc kubenswrapper[4747]: I1215 05:52:02.639522 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qs2mw" event={"ID":"c603fa2b-48da-497f-82c0-9929a9e155a6","Type":"ContainerDied","Data":"2389af3df909f6df9adbcfd83fcf3e2d233def84c20f022f146bcc8143e3f915"} Dec 15 05:52:02 crc kubenswrapper[4747]: I1215 05:52:02.639552 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qs2mw" event={"ID":"c603fa2b-48da-497f-82c0-9929a9e155a6","Type":"ContainerStarted","Data":"54d1e259eb4a473fb06b074f3bb757d027c8eab94c7fd280cd23d4a61ec2a8aa"} Dec 15 05:52:03 crc kubenswrapper[4747]: I1215 05:52:03.905389 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-xf272" Dec 15 05:52:03 crc kubenswrapper[4747]: I1215 05:52:03.948567 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25a4a3fa-f57f-41f5-9f10-664cf17f38c1-combined-ca-bundle\") pod \"25a4a3fa-f57f-41f5-9f10-664cf17f38c1\" (UID: \"25a4a3fa-f57f-41f5-9f10-664cf17f38c1\") " Dec 15 05:52:03 crc kubenswrapper[4747]: I1215 05:52:03.948734 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25a4a3fa-f57f-41f5-9f10-664cf17f38c1-config-data\") pod \"25a4a3fa-f57f-41f5-9f10-664cf17f38c1\" (UID: \"25a4a3fa-f57f-41f5-9f10-664cf17f38c1\") " Dec 15 05:52:03 crc kubenswrapper[4747]: I1215 05:52:03.948796 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2v24\" (UniqueName: \"kubernetes.io/projected/25a4a3fa-f57f-41f5-9f10-664cf17f38c1-kube-api-access-k2v24\") pod \"25a4a3fa-f57f-41f5-9f10-664cf17f38c1\" (UID: \"25a4a3fa-f57f-41f5-9f10-664cf17f38c1\") " Dec 15 05:52:03 crc kubenswrapper[4747]: I1215 05:52:03.955330 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25a4a3fa-f57f-41f5-9f10-664cf17f38c1-kube-api-access-k2v24" (OuterVolumeSpecName: "kube-api-access-k2v24") pod "25a4a3fa-f57f-41f5-9f10-664cf17f38c1" (UID: "25a4a3fa-f57f-41f5-9f10-664cf17f38c1"). InnerVolumeSpecName "kube-api-access-k2v24". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:52:03 crc kubenswrapper[4747]: I1215 05:52:03.972308 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25a4a3fa-f57f-41f5-9f10-664cf17f38c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25a4a3fa-f57f-41f5-9f10-664cf17f38c1" (UID: "25a4a3fa-f57f-41f5-9f10-664cf17f38c1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:03 crc kubenswrapper[4747]: I1215 05:52:03.994726 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25a4a3fa-f57f-41f5-9f10-664cf17f38c1-config-data" (OuterVolumeSpecName: "config-data") pod "25a4a3fa-f57f-41f5-9f10-664cf17f38c1" (UID: "25a4a3fa-f57f-41f5-9f10-664cf17f38c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.051453 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25a4a3fa-f57f-41f5-9f10-664cf17f38c1-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.051483 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2v24\" (UniqueName: \"kubernetes.io/projected/25a4a3fa-f57f-41f5-9f10-664cf17f38c1-kube-api-access-k2v24\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.051499 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25a4a3fa-f57f-41f5-9f10-664cf17f38c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.656278 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-xf272"
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.656515 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xf272" event={"ID":"25a4a3fa-f57f-41f5-9f10-664cf17f38c1","Type":"ContainerDied","Data":"2443b03ed495f03962a0064efb4272fe81696132f7d4b4450cdc5d732b394891"}
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.656562 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2443b03ed495f03962a0064efb4272fe81696132f7d4b4450cdc5d732b394891"
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.658623 4747 generic.go:334] "Generic (PLEG): container finished" podID="c603fa2b-48da-497f-82c0-9929a9e155a6" containerID="96764e5cb0ee9e5dc6a769849b8b4a6c1902d310dd77825ee1d4e43451750591" exitCode=0
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.658673 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qs2mw" event={"ID":"c603fa2b-48da-497f-82c0-9929a9e155a6","Type":"ContainerDied","Data":"96764e5cb0ee9e5dc6a769849b8b4a6c1902d310dd77825ee1d4e43451750591"}
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.912995 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6db75979dc-kxl2c"]
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.913525 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" podUID="8ad61e1e-65fd-40b7-be7f-3e2dc679aee8" containerName="dnsmasq-dns" containerID="cri-o://254e5a55b075192d212d437c5a8c992cfe8b47bcddc1542d6080110031c4c6d2" gracePeriod=10
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.928135 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4nb2f"]
Dec 15 05:52:04 crc kubenswrapper[4747]: E1215 05:52:04.928595 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25a4a3fa-f57f-41f5-9f10-664cf17f38c1" containerName="keystone-db-sync"
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.928618 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="25a4a3fa-f57f-41f5-9f10-664cf17f38c1" containerName="keystone-db-sync"
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.928834 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="25a4a3fa-f57f-41f5-9f10-664cf17f38c1" containerName="keystone-db-sync"
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.929491 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4nb2f"
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.934346 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.935000 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.935807 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x9mhm"
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.936150 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.936723 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.952418 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4nb2f"]
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.961231 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c77444c9-2658j"]
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.967371 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c77444c9-2658j"
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.967594 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-scripts\") pod \"keystone-bootstrap-4nb2f\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " pod="openstack/keystone-bootstrap-4nb2f"
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.967678 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-config-data\") pod \"keystone-bootstrap-4nb2f\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " pod="openstack/keystone-bootstrap-4nb2f"
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.967702 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-fernet-keys\") pod \"keystone-bootstrap-4nb2f\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " pod="openstack/keystone-bootstrap-4nb2f"
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.967798 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtqkf\" (UniqueName: \"kubernetes.io/projected/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-kube-api-access-wtqkf\") pod \"keystone-bootstrap-4nb2f\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " pod="openstack/keystone-bootstrap-4nb2f"
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.967971 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-combined-ca-bundle\") pod \"keystone-bootstrap-4nb2f\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " pod="openstack/keystone-bootstrap-4nb2f"
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.968107 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-credential-keys\") pod \"keystone-bootstrap-4nb2f\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " pod="openstack/keystone-bootstrap-4nb2f"
Dec 15 05:52:04 crc kubenswrapper[4747]: I1215 05:52:04.980165 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c77444c9-2658j"]
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.070169 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-combined-ca-bundle\") pod \"keystone-bootstrap-4nb2f\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " pod="openstack/keystone-bootstrap-4nb2f"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.070223 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-ovsdbserver-sb\") pod \"dnsmasq-dns-57c77444c9-2658j\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " pod="openstack/dnsmasq-dns-57c77444c9-2658j"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.070246 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-config\") pod \"dnsmasq-dns-57c77444c9-2658j\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " pod="openstack/dnsmasq-dns-57c77444c9-2658j"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.070265 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-ovsdbserver-nb\") pod \"dnsmasq-dns-57c77444c9-2658j\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " pod="openstack/dnsmasq-dns-57c77444c9-2658j"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.070282 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxrpg\" (UniqueName: \"kubernetes.io/projected/3619d791-2346-42f8-8d22-65a669474273-kube-api-access-fxrpg\") pod \"dnsmasq-dns-57c77444c9-2658j\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " pod="openstack/dnsmasq-dns-57c77444c9-2658j"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.070334 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-credential-keys\") pod \"keystone-bootstrap-4nb2f\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " pod="openstack/keystone-bootstrap-4nb2f"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.070350 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-scripts\") pod \"keystone-bootstrap-4nb2f\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " pod="openstack/keystone-bootstrap-4nb2f"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.070373 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-dns-svc\") pod \"dnsmasq-dns-57c77444c9-2658j\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " pod="openstack/dnsmasq-dns-57c77444c9-2658j"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.070407 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-config-data\") pod \"keystone-bootstrap-4nb2f\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " pod="openstack/keystone-bootstrap-4nb2f"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.070427 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-fernet-keys\") pod \"keystone-bootstrap-4nb2f\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " pod="openstack/keystone-bootstrap-4nb2f"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.070469 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-dns-swift-storage-0\") pod \"dnsmasq-dns-57c77444c9-2658j\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " pod="openstack/dnsmasq-dns-57c77444c9-2658j"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.070499 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtqkf\" (UniqueName: \"kubernetes.io/projected/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-kube-api-access-wtqkf\") pod \"keystone-bootstrap-4nb2f\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " pod="openstack/keystone-bootstrap-4nb2f"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.078075 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-combined-ca-bundle\") pod \"keystone-bootstrap-4nb2f\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " pod="openstack/keystone-bootstrap-4nb2f"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.078637 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-config-data\") pod \"keystone-bootstrap-4nb2f\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " pod="openstack/keystone-bootstrap-4nb2f"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.083332 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-scripts\") pod \"keystone-bootstrap-4nb2f\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " pod="openstack/keystone-bootstrap-4nb2f"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.083625 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-credential-keys\") pod \"keystone-bootstrap-4nb2f\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " pod="openstack/keystone-bootstrap-4nb2f"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.087383 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-fernet-keys\") pod \"keystone-bootstrap-4nb2f\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " pod="openstack/keystone-bootstrap-4nb2f"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.097424 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtqkf\" (UniqueName: \"kubernetes.io/projected/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-kube-api-access-wtqkf\") pod \"keystone-bootstrap-4nb2f\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " pod="openstack/keystone-bootstrap-4nb2f"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.132845 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.134820 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.137102 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.140413 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.146954 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-4hlv6"]
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.153626 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4hlv6"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.155982 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-njzm6"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.156264 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.156410 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.163870 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.172120 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-db-sync-config-data\") pod \"cinder-db-sync-4hlv6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " pod="openstack/cinder-db-sync-4hlv6"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.172211 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-dns-swift-storage-0\") pod \"dnsmasq-dns-57c77444c9-2658j\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " pod="openstack/dnsmasq-dns-57c77444c9-2658j"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.172248 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/873188a0-9dbb-4c95-b39e-cd503e07e59f-run-httpd\") pod \"ceilometer-0\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.172274 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-config-data\") pod \"cinder-db-sync-4hlv6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " pod="openstack/cinder-db-sync-4hlv6"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.172395 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-etc-machine-id\") pod \"cinder-db-sync-4hlv6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " pod="openstack/cinder-db-sync-4hlv6"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.172445 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.172468 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.172493 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-combined-ca-bundle\") pod \"cinder-db-sync-4hlv6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " pod="openstack/cinder-db-sync-4hlv6"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.172570 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-config\") pod \"dnsmasq-dns-57c77444c9-2658j\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " pod="openstack/dnsmasq-dns-57c77444c9-2658j"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.172592 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-ovsdbserver-sb\") pod \"dnsmasq-dns-57c77444c9-2658j\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " pod="openstack/dnsmasq-dns-57c77444c9-2658j"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.172625 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-ovsdbserver-nb\") pod \"dnsmasq-dns-57c77444c9-2658j\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " pod="openstack/dnsmasq-dns-57c77444c9-2658j"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.172656 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxrpg\" (UniqueName: \"kubernetes.io/projected/3619d791-2346-42f8-8d22-65a669474273-kube-api-access-fxrpg\") pod \"dnsmasq-dns-57c77444c9-2658j\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " pod="openstack/dnsmasq-dns-57c77444c9-2658j"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.172702 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-scripts\") pod \"cinder-db-sync-4hlv6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " pod="openstack/cinder-db-sync-4hlv6"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.172728 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-scripts\") pod \"ceilometer-0\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.172772 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/873188a0-9dbb-4c95-b39e-cd503e07e59f-log-httpd\") pod \"ceilometer-0\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.172805 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9ktk\" (UniqueName: \"kubernetes.io/projected/873188a0-9dbb-4c95-b39e-cd503e07e59f-kube-api-access-b9ktk\") pod \"ceilometer-0\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.172827 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ctpf\" (UniqueName: \"kubernetes.io/projected/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-kube-api-access-4ctpf\") pod \"cinder-db-sync-4hlv6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " pod="openstack/cinder-db-sync-4hlv6"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.172876 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-config-data\") pod \"ceilometer-0\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.172957 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-dns-svc\") pod \"dnsmasq-dns-57c77444c9-2658j\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " pod="openstack/dnsmasq-dns-57c77444c9-2658j"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.173238 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-dns-swift-storage-0\") pod \"dnsmasq-dns-57c77444c9-2658j\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " pod="openstack/dnsmasq-dns-57c77444c9-2658j"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.173289 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-config\") pod \"dnsmasq-dns-57c77444c9-2658j\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " pod="openstack/dnsmasq-dns-57c77444c9-2658j"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.173393 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-ovsdbserver-sb\") pod \"dnsmasq-dns-57c77444c9-2658j\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " pod="openstack/dnsmasq-dns-57c77444c9-2658j"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.173786 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-dns-svc\") pod \"dnsmasq-dns-57c77444c9-2658j\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " pod="openstack/dnsmasq-dns-57c77444c9-2658j"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.174490 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-ovsdbserver-nb\") pod \"dnsmasq-dns-57c77444c9-2658j\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " pod="openstack/dnsmasq-dns-57c77444c9-2658j"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.175882 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4hlv6"]
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.208873 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxrpg\" (UniqueName: \"kubernetes.io/projected/3619d791-2346-42f8-8d22-65a669474273-kube-api-access-fxrpg\") pod \"dnsmasq-dns-57c77444c9-2658j\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " pod="openstack/dnsmasq-dns-57c77444c9-2658j"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.241764 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-fd65c"]
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.243812 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4nb2f"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.247322 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fd65c"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.252701 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cbh44"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.252978 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.253112 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.253606 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fd65c"]
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.274050 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-scripts\") pod \"cinder-db-sync-4hlv6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " pod="openstack/cinder-db-sync-4hlv6"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.274093 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-scripts\") pod \"ceilometer-0\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.274122 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/873188a0-9dbb-4c95-b39e-cd503e07e59f-log-httpd\") pod \"ceilometer-0\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.274138 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9ktk\" (UniqueName: \"kubernetes.io/projected/873188a0-9dbb-4c95-b39e-cd503e07e59f-kube-api-access-b9ktk\") pod \"ceilometer-0\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.274169 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ctpf\" (UniqueName: \"kubernetes.io/projected/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-kube-api-access-4ctpf\") pod \"cinder-db-sync-4hlv6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " pod="openstack/cinder-db-sync-4hlv6"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.274199 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c51097b-ac52-49ee-95df-440e1567be8b-combined-ca-bundle\") pod \"neutron-db-sync-fd65c\" (UID: \"2c51097b-ac52-49ee-95df-440e1567be8b\") " pod="openstack/neutron-db-sync-fd65c"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.274215 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-config-data\") pod \"ceilometer-0\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.274244 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-db-sync-config-data\") pod \"cinder-db-sync-4hlv6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " pod="openstack/cinder-db-sync-4hlv6"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.274261 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c51097b-ac52-49ee-95df-440e1567be8b-config\") pod \"neutron-db-sync-fd65c\" (UID: \"2c51097b-ac52-49ee-95df-440e1567be8b\") " pod="openstack/neutron-db-sync-fd65c"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.274303 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/873188a0-9dbb-4c95-b39e-cd503e07e59f-run-httpd\") pod \"ceilometer-0\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.274323 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-config-data\") pod \"cinder-db-sync-4hlv6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " pod="openstack/cinder-db-sync-4hlv6"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.274345 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-etc-machine-id\") pod \"cinder-db-sync-4hlv6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " pod="openstack/cinder-db-sync-4hlv6"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.274364 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vrj4\" (UniqueName: \"kubernetes.io/projected/2c51097b-ac52-49ee-95df-440e1567be8b-kube-api-access-2vrj4\") pod \"neutron-db-sync-fd65c\" (UID: \"2c51097b-ac52-49ee-95df-440e1567be8b\") " pod="openstack/neutron-db-sync-fd65c"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.274384 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.274400 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.274421 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-combined-ca-bundle\") pod \"cinder-db-sync-4hlv6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " pod="openstack/cinder-db-sync-4hlv6"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.278025 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-combined-ca-bundle\") pod \"cinder-db-sync-4hlv6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " pod="openstack/cinder-db-sync-4hlv6"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.278415 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/873188a0-9dbb-4c95-b39e-cd503e07e59f-run-httpd\") pod \"ceilometer-0\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.281410 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-db-sync-config-data\") pod \"cinder-db-sync-4hlv6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " pod="openstack/cinder-db-sync-4hlv6"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.283078 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-etc-machine-id\") pod \"cinder-db-sync-4hlv6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " pod="openstack/cinder-db-sync-4hlv6"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.284346 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-scripts\") pod \"cinder-db-sync-4hlv6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " pod="openstack/cinder-db-sync-4hlv6"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.292642 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/873188a0-9dbb-4c95-b39e-cd503e07e59f-log-httpd\") pod \"ceilometer-0\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.293427 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-scripts\") pod \"ceilometer-0\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.293647 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-config-data\") pod \"cinder-db-sync-4hlv6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " pod="openstack/cinder-db-sync-4hlv6"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.295236 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-config-data\") pod \"ceilometer-0\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.295418 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.297971 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.312559 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9ktk\" (UniqueName: \"kubernetes.io/projected/873188a0-9dbb-4c95-b39e-cd503e07e59f-kube-api-access-b9ktk\") pod \"ceilometer-0\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " pod="openstack/ceilometer-0"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.317286 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c77444c9-2658j"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.340527 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ctpf\" (UniqueName: \"kubernetes.io/projected/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-kube-api-access-4ctpf\") pod \"cinder-db-sync-4hlv6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " pod="openstack/cinder-db-sync-4hlv6"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.342361 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-zsd2g"]
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.343915 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zsd2g"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.347465 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.347704 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-v6p95"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.361081 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zsd2g"]
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.378548 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c77444c9-2658j"]
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.385652 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vrj4\" (UniqueName: \"kubernetes.io/projected/2c51097b-ac52-49ee-95df-440e1567be8b-kube-api-access-2vrj4\") pod \"neutron-db-sync-fd65c\" (UID: \"2c51097b-ac52-49ee-95df-440e1567be8b\") " pod="openstack/neutron-db-sync-fd65c"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.385712 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acaca59-7888-4a05-8eb3-f925f2f8d44b-combined-ca-bundle\") pod \"barbican-db-sync-zsd2g\" (UID: \"7acaca59-7888-4a05-8eb3-f925f2f8d44b\") " pod="openstack/barbican-db-sync-zsd2g"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.385738 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7acaca59-7888-4a05-8eb3-f925f2f8d44b-db-sync-config-data\") pod \"barbican-db-sync-zsd2g\" (UID: \"7acaca59-7888-4a05-8eb3-f925f2f8d44b\") " pod="openstack/barbican-db-sync-zsd2g"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.385795 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c51097b-ac52-49ee-95df-440e1567be8b-combined-ca-bundle\") pod \"neutron-db-sync-fd65c\" (UID: \"2c51097b-ac52-49ee-95df-440e1567be8b\") " pod="openstack/neutron-db-sync-fd65c"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.385836 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c51097b-ac52-49ee-95df-440e1567be8b-config\") pod \"neutron-db-sync-fd65c\" (UID: \"2c51097b-ac52-49ee-95df-440e1567be8b\") " pod="openstack/neutron-db-sync-fd65c"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.386036 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf9tq\" (UniqueName: \"kubernetes.io/projected/7acaca59-7888-4a05-8eb3-f925f2f8d44b-kube-api-access-nf9tq\") pod \"barbican-db-sync-zsd2g\" (UID: \"7acaca59-7888-4a05-8eb3-f925f2f8d44b\") " pod="openstack/barbican-db-sync-zsd2g"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.391696 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-n8z94"]
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.393987 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c51097b-ac52-49ee-95df-440e1567be8b-combined-ca-bundle\") pod \"neutron-db-sync-fd65c\" (UID: \"2c51097b-ac52-49ee-95df-440e1567be8b\") " pod="openstack/neutron-db-sync-fd65c"
Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.394122 4747 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/placement-db-sync-n8z94" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.399200 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-n8z94"] Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.399975 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.400214 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-84b9l" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.400472 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.411704 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c51097b-ac52-49ee-95df-440e1567be8b-config\") pod \"neutron-db-sync-fd65c\" (UID: \"2c51097b-ac52-49ee-95df-440e1567be8b\") " pod="openstack/neutron-db-sync-fd65c" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.417822 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d95b78fc9-pqwbl"] Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.428259 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vrj4\" (UniqueName: \"kubernetes.io/projected/2c51097b-ac52-49ee-95df-440e1567be8b-kube-api-access-2vrj4\") pod \"neutron-db-sync-fd65c\" (UID: \"2c51097b-ac52-49ee-95df-440e1567be8b\") " pod="openstack/neutron-db-sync-fd65c" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.434986 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.446817 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d95b78fc9-pqwbl"] Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.449516 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.482742 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4hlv6" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.491655 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-config\") pod \"dnsmasq-dns-7d95b78fc9-pqwbl\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.491719 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-ovsdbserver-nb\") pod \"dnsmasq-dns-7d95b78fc9-pqwbl\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.491768 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f285358e-df22-44d4-b3b4-5a2dc69399c6-config-data\") pod \"placement-db-sync-n8z94\" (UID: \"f285358e-df22-44d4-b3b4-5a2dc69399c6\") " pod="openstack/placement-db-sync-n8z94" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.491826 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w8fw\" (UniqueName: 
\"kubernetes.io/projected/f285358e-df22-44d4-b3b4-5a2dc69399c6-kube-api-access-2w8fw\") pod \"placement-db-sync-n8z94\" (UID: \"f285358e-df22-44d4-b3b4-5a2dc69399c6\") " pod="openstack/placement-db-sync-n8z94" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.491867 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f285358e-df22-44d4-b3b4-5a2dc69399c6-combined-ca-bundle\") pod \"placement-db-sync-n8z94\" (UID: \"f285358e-df22-44d4-b3b4-5a2dc69399c6\") " pod="openstack/placement-db-sync-n8z94" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.491910 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg6pb\" (UniqueName: \"kubernetes.io/projected/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-kube-api-access-cg6pb\") pod \"dnsmasq-dns-7d95b78fc9-pqwbl\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.491986 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-dns-svc\") pod \"dnsmasq-dns-7d95b78fc9-pqwbl\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.492083 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-dns-swift-storage-0\") pod \"dnsmasq-dns-7d95b78fc9-pqwbl\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.492104 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f285358e-df22-44d4-b3b4-5a2dc69399c6-logs\") pod \"placement-db-sync-n8z94\" (UID: \"f285358e-df22-44d4-b3b4-5a2dc69399c6\") " pod="openstack/placement-db-sync-n8z94" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.492165 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-ovsdbserver-sb\") pod \"dnsmasq-dns-7d95b78fc9-pqwbl\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.492362 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf9tq\" (UniqueName: \"kubernetes.io/projected/7acaca59-7888-4a05-8eb3-f925f2f8d44b-kube-api-access-nf9tq\") pod \"barbican-db-sync-zsd2g\" (UID: \"7acaca59-7888-4a05-8eb3-f925f2f8d44b\") " pod="openstack/barbican-db-sync-zsd2g" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.492760 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acaca59-7888-4a05-8eb3-f925f2f8d44b-combined-ca-bundle\") pod \"barbican-db-sync-zsd2g\" (UID: \"7acaca59-7888-4a05-8eb3-f925f2f8d44b\") " pod="openstack/barbican-db-sync-zsd2g" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.494272 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f285358e-df22-44d4-b3b4-5a2dc69399c6-scripts\") pod \"placement-db-sync-n8z94\" (UID: \"f285358e-df22-44d4-b3b4-5a2dc69399c6\") " pod="openstack/placement-db-sync-n8z94" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.494321 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/7acaca59-7888-4a05-8eb3-f925f2f8d44b-db-sync-config-data\") pod \"barbican-db-sync-zsd2g\" (UID: \"7acaca59-7888-4a05-8eb3-f925f2f8d44b\") " pod="openstack/barbican-db-sync-zsd2g" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.498309 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7acaca59-7888-4a05-8eb3-f925f2f8d44b-db-sync-config-data\") pod \"barbican-db-sync-zsd2g\" (UID: \"7acaca59-7888-4a05-8eb3-f925f2f8d44b\") " pod="openstack/barbican-db-sync-zsd2g" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.513265 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acaca59-7888-4a05-8eb3-f925f2f8d44b-combined-ca-bundle\") pod \"barbican-db-sync-zsd2g\" (UID: \"7acaca59-7888-4a05-8eb3-f925f2f8d44b\") " pod="openstack/barbican-db-sync-zsd2g" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.533187 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf9tq\" (UniqueName: \"kubernetes.io/projected/7acaca59-7888-4a05-8eb3-f925f2f8d44b-kube-api-access-nf9tq\") pod \"barbican-db-sync-zsd2g\" (UID: \"7acaca59-7888-4a05-8eb3-f925f2f8d44b\") " pod="openstack/barbican-db-sync-zsd2g" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.564436 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-fd65c" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.595613 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f285358e-df22-44d4-b3b4-5a2dc69399c6-scripts\") pod \"placement-db-sync-n8z94\" (UID: \"f285358e-df22-44d4-b3b4-5a2dc69399c6\") " pod="openstack/placement-db-sync-n8z94" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.595670 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-config\") pod \"dnsmasq-dns-7d95b78fc9-pqwbl\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.595694 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-ovsdbserver-nb\") pod \"dnsmasq-dns-7d95b78fc9-pqwbl\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.595718 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f285358e-df22-44d4-b3b4-5a2dc69399c6-config-data\") pod \"placement-db-sync-n8z94\" (UID: \"f285358e-df22-44d4-b3b4-5a2dc69399c6\") " pod="openstack/placement-db-sync-n8z94" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.595742 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w8fw\" (UniqueName: \"kubernetes.io/projected/f285358e-df22-44d4-b3b4-5a2dc69399c6-kube-api-access-2w8fw\") pod \"placement-db-sync-n8z94\" (UID: \"f285358e-df22-44d4-b3b4-5a2dc69399c6\") " pod="openstack/placement-db-sync-n8z94" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 
05:52:05.595763 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f285358e-df22-44d4-b3b4-5a2dc69399c6-combined-ca-bundle\") pod \"placement-db-sync-n8z94\" (UID: \"f285358e-df22-44d4-b3b4-5a2dc69399c6\") " pod="openstack/placement-db-sync-n8z94" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.595784 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg6pb\" (UniqueName: \"kubernetes.io/projected/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-kube-api-access-cg6pb\") pod \"dnsmasq-dns-7d95b78fc9-pqwbl\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.595810 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-dns-svc\") pod \"dnsmasq-dns-7d95b78fc9-pqwbl\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.595832 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-dns-swift-storage-0\") pod \"dnsmasq-dns-7d95b78fc9-pqwbl\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.595849 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f285358e-df22-44d4-b3b4-5a2dc69399c6-logs\") pod \"placement-db-sync-n8z94\" (UID: \"f285358e-df22-44d4-b3b4-5a2dc69399c6\") " pod="openstack/placement-db-sync-n8z94" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.595866 4747 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-ovsdbserver-sb\") pod \"dnsmasq-dns-7d95b78fc9-pqwbl\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.597033 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-ovsdbserver-sb\") pod \"dnsmasq-dns-7d95b78fc9-pqwbl\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.597270 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-config\") pod \"dnsmasq-dns-7d95b78fc9-pqwbl\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.597618 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-dns-swift-storage-0\") pod \"dnsmasq-dns-7d95b78fc9-pqwbl\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.597874 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f285358e-df22-44d4-b3b4-5a2dc69399c6-logs\") pod \"placement-db-sync-n8z94\" (UID: \"f285358e-df22-44d4-b3b4-5a2dc69399c6\") " pod="openstack/placement-db-sync-n8z94" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.599713 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-ovsdbserver-nb\") pod \"dnsmasq-dns-7d95b78fc9-pqwbl\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.602625 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f285358e-df22-44d4-b3b4-5a2dc69399c6-scripts\") pod \"placement-db-sync-n8z94\" (UID: \"f285358e-df22-44d4-b3b4-5a2dc69399c6\") " pod="openstack/placement-db-sync-n8z94" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.605307 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f285358e-df22-44d4-b3b4-5a2dc69399c6-config-data\") pod \"placement-db-sync-n8z94\" (UID: \"f285358e-df22-44d4-b3b4-5a2dc69399c6\") " pod="openstack/placement-db-sync-n8z94" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.606839 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f285358e-df22-44d4-b3b4-5a2dc69399c6-combined-ca-bundle\") pod \"placement-db-sync-n8z94\" (UID: \"f285358e-df22-44d4-b3b4-5a2dc69399c6\") " pod="openstack/placement-db-sync-n8z94" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.608356 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-dns-svc\") pod \"dnsmasq-dns-7d95b78fc9-pqwbl\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.610611 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w8fw\" (UniqueName: \"kubernetes.io/projected/f285358e-df22-44d4-b3b4-5a2dc69399c6-kube-api-access-2w8fw\") pod \"placement-db-sync-n8z94\" (UID: 
\"f285358e-df22-44d4-b3b4-5a2dc69399c6\") " pod="openstack/placement-db-sync-n8z94" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.614203 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg6pb\" (UniqueName: \"kubernetes.io/projected/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-kube-api-access-cg6pb\") pod \"dnsmasq-dns-7d95b78fc9-pqwbl\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.679403 4747 generic.go:334] "Generic (PLEG): container finished" podID="8ad61e1e-65fd-40b7-be7f-3e2dc679aee8" containerID="254e5a55b075192d212d437c5a8c992cfe8b47bcddc1542d6080110031c4c6d2" exitCode=0 Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.679461 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" event={"ID":"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8","Type":"ContainerDied","Data":"254e5a55b075192d212d437c5a8c992cfe8b47bcddc1542d6080110031c4c6d2"} Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.694297 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zsd2g" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.732897 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-n8z94" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.759284 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.801773 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4nb2f"] Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.923141 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c77444c9-2658j"] Dec 15 05:52:05 crc kubenswrapper[4747]: I1215 05:52:05.988891 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.033552 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.035528 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.038090 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.040248 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-q89ks" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.040460 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.040615 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.043766 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4hlv6"] Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.051889 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.087471 4747 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/neutron-db-sync-fd65c"] Dec 15 05:52:06 crc kubenswrapper[4747]: W1215 05:52:06.097318 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c51097b_ac52_49ee_95df_440e1567be8b.slice/crio-4c070257d926369a55348902966ba606b5c1df33026a71f954bab9e970d44d26 WatchSource:0}: Error finding container 4c070257d926369a55348902966ba606b5c1df33026a71f954bab9e970d44d26: Status 404 returned error can't find the container with id 4c070257d926369a55348902966ba606b5c1df33026a71f954bab9e970d44d26 Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.122869 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.125511 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.127905 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.128139 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.130678 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.208858 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crm4f\" (UniqueName: \"kubernetes.io/projected/de019252-e11f-44f8-9f25-72b90aaa0b97-kube-api-access-crm4f\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.208911 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-config-data\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.208967 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.209019 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.209043 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de019252-e11f-44f8-9f25-72b90aaa0b97-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.209069 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de019252-e11f-44f8-9f25-72b90aaa0b97-logs\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.209103 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.209134 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-scripts\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.311833 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de019252-e11f-44f8-9f25-72b90aaa0b97-logs\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.311964 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.312055 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.312109 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.312264 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1f189d7-083f-4f7c-b004-bbd6be9ced19-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.312338 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-scripts\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.312384 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r54d\" (UniqueName: \"kubernetes.io/projected/e1f189d7-083f-4f7c-b004-bbd6be9ced19-kube-api-access-8r54d\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.312458 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.312493 4747 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-crm4f\" (UniqueName: \"kubernetes.io/projected/de019252-e11f-44f8-9f25-72b90aaa0b97-kube-api-access-crm4f\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.312530 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1f189d7-083f-4f7c-b004-bbd6be9ced19-logs\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.312620 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.312650 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-config-data\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.312742 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.312820 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.312907 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.312972 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de019252-e11f-44f8-9f25-72b90aaa0b97-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.313567 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de019252-e11f-44f8-9f25-72b90aaa0b97-logs\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.313605 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de019252-e11f-44f8-9f25-72b90aaa0b97-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.314190 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zsd2g"] Dec 15 05:52:06 crc 
kubenswrapper[4747]: I1215 05:52:06.314223 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.318696 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-scripts\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.319393 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-config-data\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.321895 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.325363 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: W1215 05:52:06.329166 4747 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf285358e_df22_44d4_b3b4_5a2dc69399c6.slice/crio-c9af89d1a0d9c36a9bfb8d842436adce53b02ec8842456c2f480a0f03d5ba0db WatchSource:0}: Error finding container c9af89d1a0d9c36a9bfb8d842436adce53b02ec8842456c2f480a0f03d5ba0db: Status 404 returned error can't find the container with id c9af89d1a0d9c36a9bfb8d842436adce53b02ec8842456c2f480a0f03d5ba0db Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.329423 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-n8z94"] Dec 15 05:52:06 crc kubenswrapper[4747]: W1215 05:52:06.334476 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda57fd65d_825d_4fd1_a5e6_ab244093e1b8.slice/crio-f86724a4d7de012ad911f0cb75c54e45a797271517def5c52103c27d8d0eab68 WatchSource:0}: Error finding container f86724a4d7de012ad911f0cb75c54e45a797271517def5c52103c27d8d0eab68: Status 404 returned error can't find the container with id f86724a4d7de012ad911f0cb75c54e45a797271517def5c52103c27d8d0eab68 Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.336434 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crm4f\" (UniqueName: \"kubernetes.io/projected/de019252-e11f-44f8-9f25-72b90aaa0b97-kube-api-access-crm4f\") pod \"glance-default-external-api-0\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.339424 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d95b78fc9-pqwbl"] Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.352657 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: 
\"de019252-e11f-44f8-9f25-72b90aaa0b97\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.414292 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1f189d7-083f-4f7c-b004-bbd6be9ced19-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.414363 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r54d\" (UniqueName: \"kubernetes.io/projected/e1f189d7-083f-4f7c-b004-bbd6be9ced19-kube-api-access-8r54d\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.414390 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.414421 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1f189d7-083f-4f7c-b004-bbd6be9ced19-logs\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.414460 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " 
pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.414538 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.414651 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.414690 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.415614 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1f189d7-083f-4f7c-b004-bbd6be9ced19-logs\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.415848 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1f189d7-083f-4f7c-b004-bbd6be9ced19-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.416516 4747 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.419958 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.419974 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.421644 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.421702 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.422412 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " 
pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.427725 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.455709 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r54d\" (UniqueName: \"kubernetes.io/projected/e1f189d7-083f-4f7c-b004-bbd6be9ced19-kube-api-access-8r54d\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.465527 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.517265 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-config\") pod \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.517319 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-ovsdbserver-nb\") pod \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.517578 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-dns-swift-storage-0\") pod \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\" 
(UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.517624 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-dns-svc\") pod \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.517653 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsfbt\" (UniqueName: \"kubernetes.io/projected/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-kube-api-access-bsfbt\") pod \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.517684 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-ovsdbserver-sb\") pod \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\" (UID: \"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8\") " Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.522723 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-kube-api-access-bsfbt" (OuterVolumeSpecName: "kube-api-access-bsfbt") pod "8ad61e1e-65fd-40b7-be7f-3e2dc679aee8" (UID: "8ad61e1e-65fd-40b7-be7f-3e2dc679aee8"). InnerVolumeSpecName "kube-api-access-bsfbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.563762 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-config" (OuterVolumeSpecName: "config") pod "8ad61e1e-65fd-40b7-be7f-3e2dc679aee8" (UID: "8ad61e1e-65fd-40b7-be7f-3e2dc679aee8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.569478 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8ad61e1e-65fd-40b7-be7f-3e2dc679aee8" (UID: "8ad61e1e-65fd-40b7-be7f-3e2dc679aee8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.583663 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8ad61e1e-65fd-40b7-be7f-3e2dc679aee8" (UID: "8ad61e1e-65fd-40b7-be7f-3e2dc679aee8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.591789 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8ad61e1e-65fd-40b7-be7f-3e2dc679aee8" (UID: "8ad61e1e-65fd-40b7-be7f-3e2dc679aee8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.594721 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8ad61e1e-65fd-40b7-be7f-3e2dc679aee8" (UID: "8ad61e1e-65fd-40b7-be7f-3e2dc679aee8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.620225 4747 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.620252 4747 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.620263 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsfbt\" (UniqueName: \"kubernetes.io/projected/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-kube-api-access-bsfbt\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.620276 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.620286 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.620295 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.729288 4747 generic.go:334] "Generic (PLEG): container finished" podID="3619d791-2346-42f8-8d22-65a669474273" containerID="bbc5ce8bd1546682eb3ecb19afebe740fa2b09f66c7a6ba47a8e6f28028a3127" exitCode=0 Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.729443 4747 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c77444c9-2658j" event={"ID":"3619d791-2346-42f8-8d22-65a669474273","Type":"ContainerDied","Data":"bbc5ce8bd1546682eb3ecb19afebe740fa2b09f66c7a6ba47a8e6f28028a3127"} Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.729489 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c77444c9-2658j" event={"ID":"3619d791-2346-42f8-8d22-65a669474273","Type":"ContainerStarted","Data":"2467c329d8b1a526487196bf208fbe97f303024b9849f568d747819decc8ab68"} Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.734130 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.741220 4747 generic.go:334] "Generic (PLEG): container finished" podID="a57fd65d-825d-4fd1-a5e6-ab244093e1b8" containerID="99d4532c1418e0da8750fbd98f9376279802090c90582a076f704e0a9140f38c" exitCode=0 Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.741911 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" event={"ID":"a57fd65d-825d-4fd1-a5e6-ab244093e1b8","Type":"ContainerDied","Data":"99d4532c1418e0da8750fbd98f9376279802090c90582a076f704e0a9140f38c"} Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.741969 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" event={"ID":"a57fd65d-825d-4fd1-a5e6-ab244093e1b8","Type":"ContainerStarted","Data":"f86724a4d7de012ad911f0cb75c54e45a797271517def5c52103c27d8d0eab68"} Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.746633 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fd65c" event={"ID":"2c51097b-ac52-49ee-95df-440e1567be8b","Type":"ContainerStarted","Data":"c267b0293d2990510552ae08778dea5664cb2e986cfd3442f41ab8f2bda0097b"} Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.746685 4747 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fd65c" event={"ID":"2c51097b-ac52-49ee-95df-440e1567be8b","Type":"ContainerStarted","Data":"4c070257d926369a55348902966ba606b5c1df33026a71f954bab9e970d44d26"} Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.752771 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4hlv6" event={"ID":"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6","Type":"ContainerStarted","Data":"4e51ee7677e31866c0a28e0c3f7b91de0b057a1dea64c7b1e3a5f77ef13de453"} Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.760392 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4nb2f" event={"ID":"51fe6b5c-64f2-40fd-aa82-c8b615c8d198","Type":"ContainerStarted","Data":"5035b1e88796f5b0359f747dccec2f3a4fa91d321e342ca56dc644e782351e1f"} Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.760429 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4nb2f" event={"ID":"51fe6b5c-64f2-40fd-aa82-c8b615c8d198","Type":"ContainerStarted","Data":"f71bef8fcea7285724309983dc1e4f462cbba6cd38d6d84c834ed74d1df8f0a2"} Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.761877 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n8z94" event={"ID":"f285358e-df22-44d4-b3b4-5a2dc69399c6","Type":"ContainerStarted","Data":"c9af89d1a0d9c36a9bfb8d842436adce53b02ec8842456c2f480a0f03d5ba0db"} Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.763601 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"873188a0-9dbb-4c95-b39e-cd503e07e59f","Type":"ContainerStarted","Data":"8ed9150ac3e057462166cf419eefa52777e70251bf023b66f8d3d7778dc03a50"} Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.767493 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qs2mw" 
event={"ID":"c603fa2b-48da-497f-82c0-9929a9e155a6","Type":"ContainerStarted","Data":"2635b1f6fa95746ace0821c767d78207fe435b53a55b3a6a7b7098f90fe21e5f"} Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.774808 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zsd2g" event={"ID":"7acaca59-7888-4a05-8eb3-f925f2f8d44b","Type":"ContainerStarted","Data":"17b01294a346719f951e85c8bb608f04c17d6003bbd9bc4b14bd0bf8056e5606"} Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.778093 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" event={"ID":"8ad61e1e-65fd-40b7-be7f-3e2dc679aee8","Type":"ContainerDied","Data":"03c75141a90a51886bc53aad5e3222c9f6ef08073feacc5a02f4f972a6cfce5e"} Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.778138 4747 scope.go:117] "RemoveContainer" containerID="254e5a55b075192d212d437c5a8c992cfe8b47bcddc1542d6080110031c4c6d2" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.778277 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6db75979dc-kxl2c" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.839834 4747 scope.go:117] "RemoveContainer" containerID="bce32a47ded452c6219b2f943f253bdc3274d4289b7867fa9975e28997f7b3d9" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.857098 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-fd65c" podStartSLOduration=1.857077238 podStartE2EDuration="1.857077238s" podCreationTimestamp="2025-12-15 05:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:52:06.824135558 +0000 UTC m=+890.520647475" watchObservedRunningTime="2025-12-15 05:52:06.857077238 +0000 UTC m=+890.553589155" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.910547 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4nb2f" podStartSLOduration=2.910524523 podStartE2EDuration="2.910524523s" podCreationTimestamp="2025-12-15 05:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:52:06.88161811 +0000 UTC m=+890.578130027" watchObservedRunningTime="2025-12-15 05:52:06.910524523 +0000 UTC m=+890.607036440" Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.976044 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 15 05:52:06 crc kubenswrapper[4747]: I1215 05:52:06.985387 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qs2mw" podStartSLOduration=2.878722702 podStartE2EDuration="5.985366963s" podCreationTimestamp="2025-12-15 05:52:01 +0000 UTC" firstStartedPulling="2025-12-15 05:52:02.636520984 +0000 UTC m=+886.333032901" lastFinishedPulling="2025-12-15 05:52:05.743165245 +0000 UTC 
m=+889.439677162" observedRunningTime="2025-12-15 05:52:06.949340561 +0000 UTC m=+890.645852479" watchObservedRunningTime="2025-12-15 05:52:06.985366963 +0000 UTC m=+890.681878880" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.012415 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6db75979dc-kxl2c"] Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.028090 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6db75979dc-kxl2c"] Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.068393 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.156521 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.282061 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c77444c9-2658j" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.358695 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.360889 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-dns-svc\") pod \"3619d791-2346-42f8-8d22-65a669474273\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.369478 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxrpg\" (UniqueName: \"kubernetes.io/projected/3619d791-2346-42f8-8d22-65a669474273-kube-api-access-fxrpg\") pod \"3619d791-2346-42f8-8d22-65a669474273\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.369670 4747 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-config\") pod \"3619d791-2346-42f8-8d22-65a669474273\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.369708 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-ovsdbserver-sb\") pod \"3619d791-2346-42f8-8d22-65a669474273\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.369912 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-ovsdbserver-nb\") pod \"3619d791-2346-42f8-8d22-65a669474273\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.369948 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-dns-swift-storage-0\") pod \"3619d791-2346-42f8-8d22-65a669474273\" (UID: \"3619d791-2346-42f8-8d22-65a669474273\") " Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.410708 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3619d791-2346-42f8-8d22-65a669474273-kube-api-access-fxrpg" (OuterVolumeSpecName: "kube-api-access-fxrpg") pod "3619d791-2346-42f8-8d22-65a669474273" (UID: "3619d791-2346-42f8-8d22-65a669474273"). InnerVolumeSpecName "kube-api-access-fxrpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.430216 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-config" (OuterVolumeSpecName: "config") pod "3619d791-2346-42f8-8d22-65a669474273" (UID: "3619d791-2346-42f8-8d22-65a669474273"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.430230 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3619d791-2346-42f8-8d22-65a669474273" (UID: "3619d791-2346-42f8-8d22-65a669474273"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.435440 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3619d791-2346-42f8-8d22-65a669474273" (UID: "3619d791-2346-42f8-8d22-65a669474273"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.439378 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3619d791-2346-42f8-8d22-65a669474273" (UID: "3619d791-2346-42f8-8d22-65a669474273"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.473012 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.473302 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.473315 4747 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.473326 4747 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.473335 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxrpg\" (UniqueName: \"kubernetes.io/projected/3619d791-2346-42f8-8d22-65a669474273-kube-api-access-fxrpg\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.492477 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3619d791-2346-42f8-8d22-65a669474273" (UID: "3619d791-2346-42f8-8d22-65a669474273"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.530318 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 15 05:52:07 crc kubenswrapper[4747]: W1215 05:52:07.567220 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1f189d7_083f_4f7c_b004_bbd6be9ced19.slice/crio-2a5df52846f56580325933251fdd50bd515db4ea4f1bbf9a06bb6ea6e6604c56 WatchSource:0}: Error finding container 2a5df52846f56580325933251fdd50bd515db4ea4f1bbf9a06bb6ea6e6604c56: Status 404 returned error can't find the container with id 2a5df52846f56580325933251fdd50bd515db4ea4f1bbf9a06bb6ea6e6604c56 Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.574939 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3619d791-2346-42f8-8d22-65a669474273-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.806410 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c77444c9-2658j" event={"ID":"3619d791-2346-42f8-8d22-65a669474273","Type":"ContainerDied","Data":"2467c329d8b1a526487196bf208fbe97f303024b9849f568d747819decc8ab68"} Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.806477 4747 scope.go:117] "RemoveContainer" containerID="bbc5ce8bd1546682eb3ecb19afebe740fa2b09f66c7a6ba47a8e6f28028a3127" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.806606 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c77444c9-2658j" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.848069 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" event={"ID":"a57fd65d-825d-4fd1-a5e6-ab244093e1b8","Type":"ContainerStarted","Data":"e881a4c63975aef39ce50f2a5c86af3367eef09fecf88f411f006760b4acaf8d"} Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.848825 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.864961 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e1f189d7-083f-4f7c-b004-bbd6be9ced19","Type":"ContainerStarted","Data":"2a5df52846f56580325933251fdd50bd515db4ea4f1bbf9a06bb6ea6e6604c56"} Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.880360 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c77444c9-2658j"] Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.887665 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c77444c9-2658j"] Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.888407 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" podStartSLOduration=2.8883968859999998 podStartE2EDuration="2.888396886s" podCreationTimestamp="2025-12-15 05:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:52:07.883615325 +0000 UTC m=+891.580127242" watchObservedRunningTime="2025-12-15 05:52:07.888396886 +0000 UTC m=+891.584908803" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.901048 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"de019252-e11f-44f8-9f25-72b90aaa0b97","Type":"ContainerStarted","Data":"ca961d97049cbefa960f6d8bb16379f4ce088364af9b60237c3d7eaac8288b74"} Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.974476 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-84m7w"] Dec 15 05:52:07 crc kubenswrapper[4747]: E1215 05:52:07.974893 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3619d791-2346-42f8-8d22-65a669474273" containerName="init" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.974913 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="3619d791-2346-42f8-8d22-65a669474273" containerName="init" Dec 15 05:52:07 crc kubenswrapper[4747]: E1215 05:52:07.974947 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ad61e1e-65fd-40b7-be7f-3e2dc679aee8" containerName="dnsmasq-dns" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.974954 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ad61e1e-65fd-40b7-be7f-3e2dc679aee8" containerName="dnsmasq-dns" Dec 15 05:52:07 crc kubenswrapper[4747]: E1215 05:52:07.974977 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ad61e1e-65fd-40b7-be7f-3e2dc679aee8" containerName="init" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.974983 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ad61e1e-65fd-40b7-be7f-3e2dc679aee8" containerName="init" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.975192 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="3619d791-2346-42f8-8d22-65a669474273" containerName="init" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.975209 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ad61e1e-65fd-40b7-be7f-3e2dc679aee8" containerName="dnsmasq-dns" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.976500 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-84m7w" Dec 15 05:52:07 crc kubenswrapper[4747]: I1215 05:52:07.987462 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-84m7w"] Dec 15 05:52:08 crc kubenswrapper[4747]: I1215 05:52:08.089298 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8515fd4-ce21-4f89-a703-e3807fa6fd90-utilities\") pod \"certified-operators-84m7w\" (UID: \"e8515fd4-ce21-4f89-a703-e3807fa6fd90\") " pod="openshift-marketplace/certified-operators-84m7w" Dec 15 05:52:08 crc kubenswrapper[4747]: I1215 05:52:08.089547 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8515fd4-ce21-4f89-a703-e3807fa6fd90-catalog-content\") pod \"certified-operators-84m7w\" (UID: \"e8515fd4-ce21-4f89-a703-e3807fa6fd90\") " pod="openshift-marketplace/certified-operators-84m7w" Dec 15 05:52:08 crc kubenswrapper[4747]: I1215 05:52:08.089699 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm446\" (UniqueName: \"kubernetes.io/projected/e8515fd4-ce21-4f89-a703-e3807fa6fd90-kube-api-access-cm446\") pod \"certified-operators-84m7w\" (UID: \"e8515fd4-ce21-4f89-a703-e3807fa6fd90\") " pod="openshift-marketplace/certified-operators-84m7w" Dec 15 05:52:08 crc kubenswrapper[4747]: I1215 05:52:08.191707 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8515fd4-ce21-4f89-a703-e3807fa6fd90-utilities\") pod \"certified-operators-84m7w\" (UID: \"e8515fd4-ce21-4f89-a703-e3807fa6fd90\") " pod="openshift-marketplace/certified-operators-84m7w" Dec 15 05:52:08 crc kubenswrapper[4747]: I1215 05:52:08.192146 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8515fd4-ce21-4f89-a703-e3807fa6fd90-catalog-content\") pod \"certified-operators-84m7w\" (UID: \"e8515fd4-ce21-4f89-a703-e3807fa6fd90\") " pod="openshift-marketplace/certified-operators-84m7w" Dec 15 05:52:08 crc kubenswrapper[4747]: I1215 05:52:08.192350 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm446\" (UniqueName: \"kubernetes.io/projected/e8515fd4-ce21-4f89-a703-e3807fa6fd90-kube-api-access-cm446\") pod \"certified-operators-84m7w\" (UID: \"e8515fd4-ce21-4f89-a703-e3807fa6fd90\") " pod="openshift-marketplace/certified-operators-84m7w" Dec 15 05:52:08 crc kubenswrapper[4747]: I1215 05:52:08.194135 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8515fd4-ce21-4f89-a703-e3807fa6fd90-utilities\") pod \"certified-operators-84m7w\" (UID: \"e8515fd4-ce21-4f89-a703-e3807fa6fd90\") " pod="openshift-marketplace/certified-operators-84m7w" Dec 15 05:52:08 crc kubenswrapper[4747]: I1215 05:52:08.194496 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8515fd4-ce21-4f89-a703-e3807fa6fd90-catalog-content\") pod \"certified-operators-84m7w\" (UID: \"e8515fd4-ce21-4f89-a703-e3807fa6fd90\") " pod="openshift-marketplace/certified-operators-84m7w" Dec 15 05:52:08 crc kubenswrapper[4747]: I1215 05:52:08.214877 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm446\" (UniqueName: \"kubernetes.io/projected/e8515fd4-ce21-4f89-a703-e3807fa6fd90-kube-api-access-cm446\") pod \"certified-operators-84m7w\" (UID: \"e8515fd4-ce21-4f89-a703-e3807fa6fd90\") " pod="openshift-marketplace/certified-operators-84m7w" Dec 15 05:52:08 crc kubenswrapper[4747]: I1215 05:52:08.303736 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-84m7w" Dec 15 05:52:08 crc kubenswrapper[4747]: I1215 05:52:08.652197 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3619d791-2346-42f8-8d22-65a669474273" path="/var/lib/kubelet/pods/3619d791-2346-42f8-8d22-65a669474273/volumes" Dec 15 05:52:08 crc kubenswrapper[4747]: I1215 05:52:08.660548 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ad61e1e-65fd-40b7-be7f-3e2dc679aee8" path="/var/lib/kubelet/pods/8ad61e1e-65fd-40b7-be7f-3e2dc679aee8/volumes" Dec 15 05:52:08 crc kubenswrapper[4747]: I1215 05:52:08.887845 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-84m7w"] Dec 15 05:52:08 crc kubenswrapper[4747]: W1215 05:52:08.897698 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8515fd4_ce21_4f89_a703_e3807fa6fd90.slice/crio-2d9c4e83ee7fffbc698983690f48fa8e0961af61f08128cb582d5a7f521a8b4a WatchSource:0}: Error finding container 2d9c4e83ee7fffbc698983690f48fa8e0961af61f08128cb582d5a7f521a8b4a: Status 404 returned error can't find the container with id 2d9c4e83ee7fffbc698983690f48fa8e0961af61f08128cb582d5a7f521a8b4a Dec 15 05:52:08 crc kubenswrapper[4747]: I1215 05:52:08.934530 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e1f189d7-083f-4f7c-b004-bbd6be9ced19","Type":"ContainerStarted","Data":"6f2213bfcc90b38e2a425166186e09261933b5678a4fe53155368f30ab3b3c9d"} Dec 15 05:52:08 crc kubenswrapper[4747]: I1215 05:52:08.948896 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"de019252-e11f-44f8-9f25-72b90aaa0b97","Type":"ContainerStarted","Data":"32f72745f0f98e0da1d41634a860dfaf6b0c207643255b9d261896f913318081"} Dec 15 05:52:08 crc kubenswrapper[4747]: I1215 05:52:08.957169 4747 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84m7w" event={"ID":"e8515fd4-ce21-4f89-a703-e3807fa6fd90","Type":"ContainerStarted","Data":"2d9c4e83ee7fffbc698983690f48fa8e0961af61f08128cb582d5a7f521a8b4a"} Dec 15 05:52:10 crc kubenswrapper[4747]: I1215 05:52:10.973670 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e1f189d7-083f-4f7c-b004-bbd6be9ced19","Type":"ContainerStarted","Data":"1e1076ae4c095fb9b56fc9d72549bf653a0c5aa0bd38748c5df9428a48fa5aff"} Dec 15 05:52:10 crc kubenswrapper[4747]: I1215 05:52:10.973747 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e1f189d7-083f-4f7c-b004-bbd6be9ced19" containerName="glance-log" containerID="cri-o://6f2213bfcc90b38e2a425166186e09261933b5678a4fe53155368f30ab3b3c9d" gracePeriod=30 Dec 15 05:52:10 crc kubenswrapper[4747]: I1215 05:52:10.974829 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e1f189d7-083f-4f7c-b004-bbd6be9ced19" containerName="glance-httpd" containerID="cri-o://1e1076ae4c095fb9b56fc9d72549bf653a0c5aa0bd38748c5df9428a48fa5aff" gracePeriod=30 Dec 15 05:52:10 crc kubenswrapper[4747]: I1215 05:52:10.985661 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"de019252-e11f-44f8-9f25-72b90aaa0b97","Type":"ContainerStarted","Data":"5ef51a8f10cd816d0d64b9670ad498eab0ae16cd051eb8e479657fd1627cd65c"} Dec 15 05:52:10 crc kubenswrapper[4747]: I1215 05:52:10.985695 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="de019252-e11f-44f8-9f25-72b90aaa0b97" containerName="glance-log" containerID="cri-o://32f72745f0f98e0da1d41634a860dfaf6b0c207643255b9d261896f913318081" gracePeriod=30 Dec 15 05:52:10 crc 
kubenswrapper[4747]: I1215 05:52:10.985767 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="de019252-e11f-44f8-9f25-72b90aaa0b97" containerName="glance-httpd" containerID="cri-o://5ef51a8f10cd816d0d64b9670ad498eab0ae16cd051eb8e479657fd1627cd65c" gracePeriod=30 Dec 15 05:52:10 crc kubenswrapper[4747]: I1215 05:52:10.988085 4747 generic.go:334] "Generic (PLEG): container finished" podID="e8515fd4-ce21-4f89-a703-e3807fa6fd90" containerID="ac9e6142f75de10b94be19dc75729a1cac053a1cb9c59aa4f191b74ed95b1d1b" exitCode=0 Dec 15 05:52:10 crc kubenswrapper[4747]: I1215 05:52:10.988122 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84m7w" event={"ID":"e8515fd4-ce21-4f89-a703-e3807fa6fd90","Type":"ContainerDied","Data":"ac9e6142f75de10b94be19dc75729a1cac053a1cb9c59aa4f191b74ed95b1d1b"} Dec 15 05:52:11 crc kubenswrapper[4747]: I1215 05:52:11.001453 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.001438006 podStartE2EDuration="6.001438006s" podCreationTimestamp="2025-12-15 05:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:52:10.992744668 +0000 UTC m=+894.689256585" watchObservedRunningTime="2025-12-15 05:52:11.001438006 +0000 UTC m=+894.697949923" Dec 15 05:52:11 crc kubenswrapper[4747]: I1215 05:52:11.048214 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.048197093 podStartE2EDuration="7.048197093s" podCreationTimestamp="2025-12-15 05:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:52:11.038334286 +0000 UTC m=+894.734846202" 
watchObservedRunningTime="2025-12-15 05:52:11.048197093 +0000 UTC m=+894.744709010" Dec 15 05:52:11 crc kubenswrapper[4747]: I1215 05:52:11.353205 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-25lpq"] Dec 15 05:52:11 crc kubenswrapper[4747]: I1215 05:52:11.355663 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25lpq" Dec 15 05:52:11 crc kubenswrapper[4747]: I1215 05:52:11.377070 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74hjh\" (UniqueName: \"kubernetes.io/projected/656ef4f9-8e82-43ce-b0f9-b654bcecb12a-kube-api-access-74hjh\") pod \"community-operators-25lpq\" (UID: \"656ef4f9-8e82-43ce-b0f9-b654bcecb12a\") " pod="openshift-marketplace/community-operators-25lpq" Dec 15 05:52:11 crc kubenswrapper[4747]: I1215 05:52:11.377153 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/656ef4f9-8e82-43ce-b0f9-b654bcecb12a-catalog-content\") pod \"community-operators-25lpq\" (UID: \"656ef4f9-8e82-43ce-b0f9-b654bcecb12a\") " pod="openshift-marketplace/community-operators-25lpq" Dec 15 05:52:11 crc kubenswrapper[4747]: I1215 05:52:11.377190 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/656ef4f9-8e82-43ce-b0f9-b654bcecb12a-utilities\") pod \"community-operators-25lpq\" (UID: \"656ef4f9-8e82-43ce-b0f9-b654bcecb12a\") " pod="openshift-marketplace/community-operators-25lpq" Dec 15 05:52:11 crc kubenswrapper[4747]: I1215 05:52:11.381684 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-25lpq"] Dec 15 05:52:11 crc kubenswrapper[4747]: I1215 05:52:11.478967 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/656ef4f9-8e82-43ce-b0f9-b654bcecb12a-utilities\") pod \"community-operators-25lpq\" (UID: \"656ef4f9-8e82-43ce-b0f9-b654bcecb12a\") " pod="openshift-marketplace/community-operators-25lpq" Dec 15 05:52:11 crc kubenswrapper[4747]: I1215 05:52:11.478975 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/656ef4f9-8e82-43ce-b0f9-b654bcecb12a-utilities\") pod \"community-operators-25lpq\" (UID: \"656ef4f9-8e82-43ce-b0f9-b654bcecb12a\") " pod="openshift-marketplace/community-operators-25lpq" Dec 15 05:52:11 crc kubenswrapper[4747]: I1215 05:52:11.479201 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74hjh\" (UniqueName: \"kubernetes.io/projected/656ef4f9-8e82-43ce-b0f9-b654bcecb12a-kube-api-access-74hjh\") pod \"community-operators-25lpq\" (UID: \"656ef4f9-8e82-43ce-b0f9-b654bcecb12a\") " pod="openshift-marketplace/community-operators-25lpq" Dec 15 05:52:11 crc kubenswrapper[4747]: I1215 05:52:11.479264 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/656ef4f9-8e82-43ce-b0f9-b654bcecb12a-catalog-content\") pod \"community-operators-25lpq\" (UID: \"656ef4f9-8e82-43ce-b0f9-b654bcecb12a\") " pod="openshift-marketplace/community-operators-25lpq" Dec 15 05:52:11 crc kubenswrapper[4747]: I1215 05:52:11.479518 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/656ef4f9-8e82-43ce-b0f9-b654bcecb12a-catalog-content\") pod \"community-operators-25lpq\" (UID: \"656ef4f9-8e82-43ce-b0f9-b654bcecb12a\") " pod="openshift-marketplace/community-operators-25lpq" Dec 15 05:52:11 crc kubenswrapper[4747]: I1215 05:52:11.491973 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-qs2mw" Dec 15 05:52:11 crc kubenswrapper[4747]: I1215 05:52:11.492010 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qs2mw" Dec 15 05:52:11 crc kubenswrapper[4747]: I1215 05:52:11.522345 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74hjh\" (UniqueName: \"kubernetes.io/projected/656ef4f9-8e82-43ce-b0f9-b654bcecb12a-kube-api-access-74hjh\") pod \"community-operators-25lpq\" (UID: \"656ef4f9-8e82-43ce-b0f9-b654bcecb12a\") " pod="openshift-marketplace/community-operators-25lpq" Dec 15 05:52:11 crc kubenswrapper[4747]: I1215 05:52:11.538684 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qs2mw" Dec 15 05:52:11 crc kubenswrapper[4747]: I1215 05:52:11.680180 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25lpq" Dec 15 05:52:11 crc kubenswrapper[4747]: I1215 05:52:11.998800 4747 generic.go:334] "Generic (PLEG): container finished" podID="51fe6b5c-64f2-40fd-aa82-c8b615c8d198" containerID="5035b1e88796f5b0359f747dccec2f3a4fa91d321e342ca56dc644e782351e1f" exitCode=0 Dec 15 05:52:11 crc kubenswrapper[4747]: I1215 05:52:11.998885 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4nb2f" event={"ID":"51fe6b5c-64f2-40fd-aa82-c8b615c8d198","Type":"ContainerDied","Data":"5035b1e88796f5b0359f747dccec2f3a4fa91d321e342ca56dc644e782351e1f"} Dec 15 05:52:12 crc kubenswrapper[4747]: I1215 05:52:12.007023 4747 generic.go:334] "Generic (PLEG): container finished" podID="e1f189d7-083f-4f7c-b004-bbd6be9ced19" containerID="1e1076ae4c095fb9b56fc9d72549bf653a0c5aa0bd38748c5df9428a48fa5aff" exitCode=0 Dec 15 05:52:12 crc kubenswrapper[4747]: I1215 05:52:12.007047 4747 generic.go:334] "Generic (PLEG): container finished" 
podID="e1f189d7-083f-4f7c-b004-bbd6be9ced19" containerID="6f2213bfcc90b38e2a425166186e09261933b5678a4fe53155368f30ab3b3c9d" exitCode=143 Dec 15 05:52:12 crc kubenswrapper[4747]: I1215 05:52:12.007083 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e1f189d7-083f-4f7c-b004-bbd6be9ced19","Type":"ContainerDied","Data":"1e1076ae4c095fb9b56fc9d72549bf653a0c5aa0bd38748c5df9428a48fa5aff"} Dec 15 05:52:12 crc kubenswrapper[4747]: I1215 05:52:12.007103 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e1f189d7-083f-4f7c-b004-bbd6be9ced19","Type":"ContainerDied","Data":"6f2213bfcc90b38e2a425166186e09261933b5678a4fe53155368f30ab3b3c9d"} Dec 15 05:52:12 crc kubenswrapper[4747]: I1215 05:52:12.010154 4747 generic.go:334] "Generic (PLEG): container finished" podID="de019252-e11f-44f8-9f25-72b90aaa0b97" containerID="5ef51a8f10cd816d0d64b9670ad498eab0ae16cd051eb8e479657fd1627cd65c" exitCode=0 Dec 15 05:52:12 crc kubenswrapper[4747]: I1215 05:52:12.010187 4747 generic.go:334] "Generic (PLEG): container finished" podID="de019252-e11f-44f8-9f25-72b90aaa0b97" containerID="32f72745f0f98e0da1d41634a860dfaf6b0c207643255b9d261896f913318081" exitCode=143 Dec 15 05:52:12 crc kubenswrapper[4747]: I1215 05:52:12.010268 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"de019252-e11f-44f8-9f25-72b90aaa0b97","Type":"ContainerDied","Data":"5ef51a8f10cd816d0d64b9670ad498eab0ae16cd051eb8e479657fd1627cd65c"} Dec 15 05:52:12 crc kubenswrapper[4747]: I1215 05:52:12.010338 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"de019252-e11f-44f8-9f25-72b90aaa0b97","Type":"ContainerDied","Data":"32f72745f0f98e0da1d41634a860dfaf6b0c207643255b9d261896f913318081"} Dec 15 05:52:12 crc kubenswrapper[4747]: I1215 05:52:12.053323 4747 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qs2mw" Dec 15 05:52:14 crc kubenswrapper[4747]: I1215 05:52:14.347874 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qs2mw"] Dec 15 05:52:14 crc kubenswrapper[4747]: I1215 05:52:14.349533 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qs2mw" podUID="c603fa2b-48da-497f-82c0-9929a9e155a6" containerName="registry-server" containerID="cri-o://2635b1f6fa95746ace0821c767d78207fe435b53a55b3a6a7b7098f90fe21e5f" gracePeriod=2 Dec 15 05:52:15 crc kubenswrapper[4747]: I1215 05:52:15.088726 4747 generic.go:334] "Generic (PLEG): container finished" podID="c603fa2b-48da-497f-82c0-9929a9e155a6" containerID="2635b1f6fa95746ace0821c767d78207fe435b53a55b3a6a7b7098f90fe21e5f" exitCode=0 Dec 15 05:52:15 crc kubenswrapper[4747]: I1215 05:52:15.088803 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qs2mw" event={"ID":"c603fa2b-48da-497f-82c0-9929a9e155a6","Type":"ContainerDied","Data":"2635b1f6fa95746ace0821c767d78207fe435b53a55b3a6a7b7098f90fe21e5f"} Dec 15 05:52:15 crc kubenswrapper[4747]: I1215 05:52:15.761512 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:15 crc kubenswrapper[4747]: I1215 05:52:15.813438 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fb8b8965-6vtwl"] Dec 15 05:52:15 crc kubenswrapper[4747]: I1215 05:52:15.813687 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" podUID="bade9597-335c-43a4-9477-ab4f08999fa8" containerName="dnsmasq-dns" containerID="cri-o://f52a5430d4461fd55c11ee7a0137c08c01ef2af7d98f22774a239b372cbabebd" gracePeriod=10 Dec 15 05:52:16 crc kubenswrapper[4747]: I1215 05:52:16.101640 4747 
generic.go:334] "Generic (PLEG): container finished" podID="2c51097b-ac52-49ee-95df-440e1567be8b" containerID="c267b0293d2990510552ae08778dea5664cb2e986cfd3442f41ab8f2bda0097b" exitCode=0 Dec 15 05:52:16 crc kubenswrapper[4747]: I1215 05:52:16.101717 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fd65c" event={"ID":"2c51097b-ac52-49ee-95df-440e1567be8b","Type":"ContainerDied","Data":"c267b0293d2990510552ae08778dea5664cb2e986cfd3442f41ab8f2bda0097b"} Dec 15 05:52:16 crc kubenswrapper[4747]: I1215 05:52:16.757558 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" podUID="bade9597-335c-43a4-9477-ab4f08999fa8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.107:5353: connect: connection refused" Dec 15 05:52:16 crc kubenswrapper[4747]: I1215 05:52:16.759247 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5tm7d"] Dec 15 05:52:16 crc kubenswrapper[4747]: I1215 05:52:16.764790 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5tm7d" Dec 15 05:52:16 crc kubenswrapper[4747]: I1215 05:52:16.778250 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5tm7d"] Dec 15 05:52:16 crc kubenswrapper[4747]: I1215 05:52:16.797294 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aa44578-a974-4c1f-90db-014ecf544678-catalog-content\") pod \"redhat-operators-5tm7d\" (UID: \"7aa44578-a974-4c1f-90db-014ecf544678\") " pod="openshift-marketplace/redhat-operators-5tm7d" Dec 15 05:52:16 crc kubenswrapper[4747]: I1215 05:52:16.797411 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm6zg\" (UniqueName: \"kubernetes.io/projected/7aa44578-a974-4c1f-90db-014ecf544678-kube-api-access-wm6zg\") pod \"redhat-operators-5tm7d\" (UID: \"7aa44578-a974-4c1f-90db-014ecf544678\") " pod="openshift-marketplace/redhat-operators-5tm7d" Dec 15 05:52:16 crc kubenswrapper[4747]: I1215 05:52:16.797437 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aa44578-a974-4c1f-90db-014ecf544678-utilities\") pod \"redhat-operators-5tm7d\" (UID: \"7aa44578-a974-4c1f-90db-014ecf544678\") " pod="openshift-marketplace/redhat-operators-5tm7d" Dec 15 05:52:16 crc kubenswrapper[4747]: I1215 05:52:16.899478 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm6zg\" (UniqueName: \"kubernetes.io/projected/7aa44578-a974-4c1f-90db-014ecf544678-kube-api-access-wm6zg\") pod \"redhat-operators-5tm7d\" (UID: \"7aa44578-a974-4c1f-90db-014ecf544678\") " pod="openshift-marketplace/redhat-operators-5tm7d" Dec 15 05:52:16 crc kubenswrapper[4747]: I1215 05:52:16.899537 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aa44578-a974-4c1f-90db-014ecf544678-utilities\") pod \"redhat-operators-5tm7d\" (UID: \"7aa44578-a974-4c1f-90db-014ecf544678\") " pod="openshift-marketplace/redhat-operators-5tm7d" Dec 15 05:52:16 crc kubenswrapper[4747]: I1215 05:52:16.899715 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aa44578-a974-4c1f-90db-014ecf544678-catalog-content\") pod \"redhat-operators-5tm7d\" (UID: \"7aa44578-a974-4c1f-90db-014ecf544678\") " pod="openshift-marketplace/redhat-operators-5tm7d" Dec 15 05:52:16 crc kubenswrapper[4747]: I1215 05:52:16.900222 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aa44578-a974-4c1f-90db-014ecf544678-utilities\") pod \"redhat-operators-5tm7d\" (UID: \"7aa44578-a974-4c1f-90db-014ecf544678\") " pod="openshift-marketplace/redhat-operators-5tm7d" Dec 15 05:52:16 crc kubenswrapper[4747]: I1215 05:52:16.900292 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aa44578-a974-4c1f-90db-014ecf544678-catalog-content\") pod \"redhat-operators-5tm7d\" (UID: \"7aa44578-a974-4c1f-90db-014ecf544678\") " pod="openshift-marketplace/redhat-operators-5tm7d" Dec 15 05:52:16 crc kubenswrapper[4747]: I1215 05:52:16.923292 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm6zg\" (UniqueName: \"kubernetes.io/projected/7aa44578-a974-4c1f-90db-014ecf544678-kube-api-access-wm6zg\") pod \"redhat-operators-5tm7d\" (UID: \"7aa44578-a974-4c1f-90db-014ecf544678\") " pod="openshift-marketplace/redhat-operators-5tm7d" Dec 15 05:52:17 crc kubenswrapper[4747]: I1215 05:52:17.085002 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5tm7d" Dec 15 05:52:17 crc kubenswrapper[4747]: I1215 05:52:17.124548 4747 generic.go:334] "Generic (PLEG): container finished" podID="bade9597-335c-43a4-9477-ab4f08999fa8" containerID="f52a5430d4461fd55c11ee7a0137c08c01ef2af7d98f22774a239b372cbabebd" exitCode=0 Dec 15 05:52:17 crc kubenswrapper[4747]: I1215 05:52:17.124623 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" event={"ID":"bade9597-335c-43a4-9477-ab4f08999fa8","Type":"ContainerDied","Data":"f52a5430d4461fd55c11ee7a0137c08c01ef2af7d98f22774a239b372cbabebd"} Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.138695 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4nb2f" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.154121 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.176541 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crm4f\" (UniqueName: \"kubernetes.io/projected/de019252-e11f-44f8-9f25-72b90aaa0b97-kube-api-access-crm4f\") pod \"de019252-e11f-44f8-9f25-72b90aaa0b97\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.176630 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-config-data\") pod \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.176680 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-public-tls-certs\") pod 
\"de019252-e11f-44f8-9f25-72b90aaa0b97\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.176717 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de019252-e11f-44f8-9f25-72b90aaa0b97-httpd-run\") pod \"de019252-e11f-44f8-9f25-72b90aaa0b97\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.176776 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-combined-ca-bundle\") pod \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.176827 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-config-data\") pod \"de019252-e11f-44f8-9f25-72b90aaa0b97\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.176849 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-combined-ca-bundle\") pod \"de019252-e11f-44f8-9f25-72b90aaa0b97\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.176966 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-fernet-keys\") pod \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.177008 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-scripts\") pod \"de019252-e11f-44f8-9f25-72b90aaa0b97\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.177044 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-scripts\") pod \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.177073 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtqkf\" (UniqueName: \"kubernetes.io/projected/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-kube-api-access-wtqkf\") pod \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.177102 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"de019252-e11f-44f8-9f25-72b90aaa0b97\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.177133 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-credential-keys\") pod \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\" (UID: \"51fe6b5c-64f2-40fd-aa82-c8b615c8d198\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.177337 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de019252-e11f-44f8-9f25-72b90aaa0b97-logs\") pod \"de019252-e11f-44f8-9f25-72b90aaa0b97\" (UID: \"de019252-e11f-44f8-9f25-72b90aaa0b97\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.180351 4747 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de019252-e11f-44f8-9f25-72b90aaa0b97-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "de019252-e11f-44f8-9f25-72b90aaa0b97" (UID: "de019252-e11f-44f8-9f25-72b90aaa0b97"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.181309 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de019252-e11f-44f8-9f25-72b90aaa0b97-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.181610 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de019252-e11f-44f8-9f25-72b90aaa0b97-logs" (OuterVolumeSpecName: "logs") pod "de019252-e11f-44f8-9f25-72b90aaa0b97" (UID: "de019252-e11f-44f8-9f25-72b90aaa0b97"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.181797 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"de019252-e11f-44f8-9f25-72b90aaa0b97","Type":"ContainerDied","Data":"ca961d97049cbefa960f6d8bb16379f4ce088364af9b60237c3d7eaac8288b74"} Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.181855 4747 scope.go:117] "RemoveContainer" containerID="5ef51a8f10cd816d0d64b9670ad498eab0ae16cd051eb8e479657fd1627cd65c" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.182300 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.183944 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.187975 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-fd65c" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.195267 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "51fe6b5c-64f2-40fd-aa82-c8b615c8d198" (UID: "51fe6b5c-64f2-40fd-aa82-c8b615c8d198"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.195305 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-scripts" (OuterVolumeSpecName: "scripts") pod "de019252-e11f-44f8-9f25-72b90aaa0b97" (UID: "de019252-e11f-44f8-9f25-72b90aaa0b97"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.195310 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "51fe6b5c-64f2-40fd-aa82-c8b615c8d198" (UID: "51fe6b5c-64f2-40fd-aa82-c8b615c8d198"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.195617 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-scripts" (OuterVolumeSpecName: "scripts") pod "51fe6b5c-64f2-40fd-aa82-c8b615c8d198" (UID: "51fe6b5c-64f2-40fd-aa82-c8b615c8d198"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.196966 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-kube-api-access-wtqkf" (OuterVolumeSpecName: "kube-api-access-wtqkf") pod "51fe6b5c-64f2-40fd-aa82-c8b615c8d198" (UID: "51fe6b5c-64f2-40fd-aa82-c8b615c8d198"). InnerVolumeSpecName "kube-api-access-wtqkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.197652 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fd65c" event={"ID":"2c51097b-ac52-49ee-95df-440e1567be8b","Type":"ContainerDied","Data":"4c070257d926369a55348902966ba606b5c1df33026a71f954bab9e970d44d26"} Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.197682 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c070257d926369a55348902966ba606b5c1df33026a71f954bab9e970d44d26" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.198726 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de019252-e11f-44f8-9f25-72b90aaa0b97-kube-api-access-crm4f" (OuterVolumeSpecName: "kube-api-access-crm4f") pod "de019252-e11f-44f8-9f25-72b90aaa0b97" (UID: "de019252-e11f-44f8-9f25-72b90aaa0b97"). InnerVolumeSpecName "kube-api-access-crm4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.199697 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4nb2f" event={"ID":"51fe6b5c-64f2-40fd-aa82-c8b615c8d198","Type":"ContainerDied","Data":"f71bef8fcea7285724309983dc1e4f462cbba6cd38d6d84c834ed74d1df8f0a2"} Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.199730 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f71bef8fcea7285724309983dc1e4f462cbba6cd38d6d84c834ed74d1df8f0a2" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.199774 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4nb2f" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.201217 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "de019252-e11f-44f8-9f25-72b90aaa0b97" (UID: "de019252-e11f-44f8-9f25-72b90aaa0b97"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.205671 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e1f189d7-083f-4f7c-b004-bbd6be9ced19","Type":"ContainerDied","Data":"2a5df52846f56580325933251fdd50bd515db4ea4f1bbf9a06bb6ea6e6604c56"} Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.205720 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.211286 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de019252-e11f-44f8-9f25-72b90aaa0b97" (UID: "de019252-e11f-44f8-9f25-72b90aaa0b97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.238943 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-config-data" (OuterVolumeSpecName: "config-data") pod "51fe6b5c-64f2-40fd-aa82-c8b615c8d198" (UID: "51fe6b5c-64f2-40fd-aa82-c8b615c8d198"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.241893 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51fe6b5c-64f2-40fd-aa82-c8b615c8d198" (UID: "51fe6b5c-64f2-40fd-aa82-c8b615c8d198"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.245289 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-config-data" (OuterVolumeSpecName: "config-data") pod "de019252-e11f-44f8-9f25-72b90aaa0b97" (UID: "de019252-e11f-44f8-9f25-72b90aaa0b97"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.246188 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "de019252-e11f-44f8-9f25-72b90aaa0b97" (UID: "de019252-e11f-44f8-9f25-72b90aaa0b97"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.282798 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vrj4\" (UniqueName: \"kubernetes.io/projected/2c51097b-ac52-49ee-95df-440e1567be8b-kube-api-access-2vrj4\") pod \"2c51097b-ac52-49ee-95df-440e1567be8b\" (UID: \"2c51097b-ac52-49ee-95df-440e1567be8b\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.282873 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-combined-ca-bundle\") pod \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.282914 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-internal-tls-certs\") pod \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.282978 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.283015 4747 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c51097b-ac52-49ee-95df-440e1567be8b-combined-ca-bundle\") pod \"2c51097b-ac52-49ee-95df-440e1567be8b\" (UID: \"2c51097b-ac52-49ee-95df-440e1567be8b\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.283086 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c51097b-ac52-49ee-95df-440e1567be8b-config\") pod \"2c51097b-ac52-49ee-95df-440e1567be8b\" (UID: \"2c51097b-ac52-49ee-95df-440e1567be8b\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.283119 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-config-data\") pod \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.283140 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r54d\" (UniqueName: \"kubernetes.io/projected/e1f189d7-083f-4f7c-b004-bbd6be9ced19-kube-api-access-8r54d\") pod \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.283176 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1f189d7-083f-4f7c-b004-bbd6be9ced19-logs\") pod \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.283272 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-scripts\") pod \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " Dec 15 05:52:20 
crc kubenswrapper[4747]: I1215 05:52:20.283308 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1f189d7-083f-4f7c-b004-bbd6be9ced19-httpd-run\") pod \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\" (UID: \"e1f189d7-083f-4f7c-b004-bbd6be9ced19\") " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.283846 4747 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.283865 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.283874 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.283885 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtqkf\" (UniqueName: \"kubernetes.io/projected/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-kube-api-access-wtqkf\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.283908 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.283918 4747 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.283938 4747 reconciler_common.go:293] "Volume 
detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de019252-e11f-44f8-9f25-72b90aaa0b97-logs\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.283947 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crm4f\" (UniqueName: \"kubernetes.io/projected/de019252-e11f-44f8-9f25-72b90aaa0b97-kube-api-access-crm4f\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.283957 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.283965 4747 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.283974 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51fe6b5c-64f2-40fd-aa82-c8b615c8d198-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.283983 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.283994 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de019252-e11f-44f8-9f25-72b90aaa0b97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.298816 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod 
"e1f189d7-083f-4f7c-b004-bbd6be9ced19" (UID: "e1f189d7-083f-4f7c-b004-bbd6be9ced19"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.299133 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1f189d7-083f-4f7c-b004-bbd6be9ced19-logs" (OuterVolumeSpecName: "logs") pod "e1f189d7-083f-4f7c-b004-bbd6be9ced19" (UID: "e1f189d7-083f-4f7c-b004-bbd6be9ced19"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.299380 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1f189d7-083f-4f7c-b004-bbd6be9ced19-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e1f189d7-083f-4f7c-b004-bbd6be9ced19" (UID: "e1f189d7-083f-4f7c-b004-bbd6be9ced19"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.299916 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.305210 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c51097b-ac52-49ee-95df-440e1567be8b-kube-api-access-2vrj4" (OuterVolumeSpecName: "kube-api-access-2vrj4") pod "2c51097b-ac52-49ee-95df-440e1567be8b" (UID: "2c51097b-ac52-49ee-95df-440e1567be8b"). InnerVolumeSpecName "kube-api-access-2vrj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.305675 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1f189d7-083f-4f7c-b004-bbd6be9ced19-kube-api-access-8r54d" (OuterVolumeSpecName: "kube-api-access-8r54d") pod "e1f189d7-083f-4f7c-b004-bbd6be9ced19" (UID: "e1f189d7-083f-4f7c-b004-bbd6be9ced19"). InnerVolumeSpecName "kube-api-access-8r54d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.307129 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-scripts" (OuterVolumeSpecName: "scripts") pod "e1f189d7-083f-4f7c-b004-bbd6be9ced19" (UID: "e1f189d7-083f-4f7c-b004-bbd6be9ced19"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.313068 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1f189d7-083f-4f7c-b004-bbd6be9ced19" (UID: "e1f189d7-083f-4f7c-b004-bbd6be9ced19"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.321124 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c51097b-ac52-49ee-95df-440e1567be8b-config" (OuterVolumeSpecName: "config") pod "2c51097b-ac52-49ee-95df-440e1567be8b" (UID: "2c51097b-ac52-49ee-95df-440e1567be8b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.323604 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c51097b-ac52-49ee-95df-440e1567be8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c51097b-ac52-49ee-95df-440e1567be8b" (UID: "2c51097b-ac52-49ee-95df-440e1567be8b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.333098 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-config-data" (OuterVolumeSpecName: "config-data") pod "e1f189d7-083f-4f7c-b004-bbd6be9ced19" (UID: "e1f189d7-083f-4f7c-b004-bbd6be9ced19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.348506 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e1f189d7-083f-4f7c-b004-bbd6be9ced19" (UID: "e1f189d7-083f-4f7c-b004-bbd6be9ced19"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.388651 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vrj4\" (UniqueName: \"kubernetes.io/projected/2c51097b-ac52-49ee-95df-440e1567be8b-kube-api-access-2vrj4\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.388720 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.388732 4747 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.388783 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.388795 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c51097b-ac52-49ee-95df-440e1567be8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.388806 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c51097b-ac52-49ee-95df-440e1567be8b-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.388818 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.388827 4747 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r54d\" (UniqueName: \"kubernetes.io/projected/e1f189d7-083f-4f7c-b004-bbd6be9ced19-kube-api-access-8r54d\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.388840 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1f189d7-083f-4f7c-b004-bbd6be9ced19-logs\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.388850 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.388861 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1f189d7-083f-4f7c-b004-bbd6be9ced19-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.388872 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1f189d7-083f-4f7c-b004-bbd6be9ced19-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.410626 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.492320 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.534245 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.543817 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-default-external-api-0"] Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.548267 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.552729 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.558583 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 15 05:52:20 crc kubenswrapper[4747]: E1215 05:52:20.559043 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f189d7-083f-4f7c-b004-bbd6be9ced19" containerName="glance-httpd" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.559063 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f189d7-083f-4f7c-b004-bbd6be9ced19" containerName="glance-httpd" Dec 15 05:52:20 crc kubenswrapper[4747]: E1215 05:52:20.559086 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f189d7-083f-4f7c-b004-bbd6be9ced19" containerName="glance-log" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.559095 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f189d7-083f-4f7c-b004-bbd6be9ced19" containerName="glance-log" Dec 15 05:52:20 crc kubenswrapper[4747]: E1215 05:52:20.559111 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de019252-e11f-44f8-9f25-72b90aaa0b97" containerName="glance-log" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.559117 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="de019252-e11f-44f8-9f25-72b90aaa0b97" containerName="glance-log" Dec 15 05:52:20 crc kubenswrapper[4747]: E1215 05:52:20.559130 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de019252-e11f-44f8-9f25-72b90aaa0b97" containerName="glance-httpd" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.559136 4747 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="de019252-e11f-44f8-9f25-72b90aaa0b97" containerName="glance-httpd" Dec 15 05:52:20 crc kubenswrapper[4747]: E1215 05:52:20.559158 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c51097b-ac52-49ee-95df-440e1567be8b" containerName="neutron-db-sync" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.559163 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c51097b-ac52-49ee-95df-440e1567be8b" containerName="neutron-db-sync" Dec 15 05:52:20 crc kubenswrapper[4747]: E1215 05:52:20.559180 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51fe6b5c-64f2-40fd-aa82-c8b615c8d198" containerName="keystone-bootstrap" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.559187 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="51fe6b5c-64f2-40fd-aa82-c8b615c8d198" containerName="keystone-bootstrap" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.559364 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="de019252-e11f-44f8-9f25-72b90aaa0b97" containerName="glance-log" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.559379 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="de019252-e11f-44f8-9f25-72b90aaa0b97" containerName="glance-httpd" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.559390 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="51fe6b5c-64f2-40fd-aa82-c8b615c8d198" containerName="keystone-bootstrap" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.559400 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1f189d7-083f-4f7c-b004-bbd6be9ced19" containerName="glance-httpd" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.559412 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1f189d7-083f-4f7c-b004-bbd6be9ced19" containerName="glance-log" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.559420 4747 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2c51097b-ac52-49ee-95df-440e1567be8b" containerName="neutron-db-sync" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.560319 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.576027 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.576304 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.576621 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.577367 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-q89ks" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.582326 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.597429 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-scripts\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.597558 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.597586 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/025696d8-212d-4b2b-bff8-87abde7b3a0b-logs\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.597647 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.597728 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.597775 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-config-data\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.597825 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/025696d8-212d-4b2b-bff8-87abde7b3a0b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 
05:52:20.597854 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngrxf\" (UniqueName: \"kubernetes.io/projected/025696d8-212d-4b2b-bff8-87abde7b3a0b-kube-api-access-ngrxf\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.606478 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.609524 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.609827 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.610390 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.639747 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de019252-e11f-44f8-9f25-72b90aaa0b97" path="/var/lib/kubelet/pods/de019252-e11f-44f8-9f25-72b90aaa0b97/volumes" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.640887 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1f189d7-083f-4f7c-b004-bbd6be9ced19" path="/var/lib/kubelet/pods/e1f189d7-083f-4f7c-b004-bbd6be9ced19/volumes" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.646504 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.699308 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.699406 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.699433 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.699492 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.699515 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.699546 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/025696d8-212d-4b2b-bff8-87abde7b3a0b-logs\") pod \"glance-default-external-api-0\" (UID: 
\"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.699566 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.699587 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.699616 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.699691 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.699743 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.699775 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/025696d8-212d-4b2b-bff8-87abde7b3a0b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.700404 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngrxf\" (UniqueName: \"kubernetes.io/projected/025696d8-212d-4b2b-bff8-87abde7b3a0b-kube-api-access-ngrxf\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.700507 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.700510 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/025696d8-212d-4b2b-bff8-87abde7b3a0b-logs\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.700525 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") device mount path \"/mnt/openstack/pv11\"" 
pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.700425 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/025696d8-212d-4b2b-bff8-87abde7b3a0b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.700676 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-logs\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.700939 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp2n9\" (UniqueName: \"kubernetes.io/projected/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-kube-api-access-bp2n9\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.704463 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.706479 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " 
pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.707082 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-config-data\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.713304 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-scripts\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.715959 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngrxf\" (UniqueName: \"kubernetes.io/projected/025696d8-212d-4b2b-bff8-87abde7b3a0b-kube-api-access-ngrxf\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.725509 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.806962 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.807015 4747 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-logs\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.807068 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp2n9\" (UniqueName: \"kubernetes.io/projected/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-kube-api-access-bp2n9\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.807365 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.807399 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.807452 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.807483 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.807503 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.807716 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.808066 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-logs\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.809259 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.814219 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.815349 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.815722 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.816472 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.821912 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp2n9\" (UniqueName: \"kubernetes.io/projected/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-kube-api-access-bp2n9\") pod \"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.835405 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.896675 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 15 05:52:20 crc kubenswrapper[4747]: I1215 05:52:20.924900 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.212295 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fd65c" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.244434 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4nb2f"] Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.255398 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4nb2f"] Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.318539 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-d5wlm"] Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.319714 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.322306 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.322609 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.322853 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x9mhm" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.323073 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.323285 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.344214 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-d5wlm"] Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.396068 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5798cc97cf-fkkrr"] Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.397811 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.406675 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5798cc97cf-fkkrr"] Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.419462 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-ovsdbserver-sb\") pod \"dnsmasq-dns-5798cc97cf-fkkrr\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.419561 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-config\") pod \"dnsmasq-dns-5798cc97cf-fkkrr\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.419584 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6dlq\" (UniqueName: \"kubernetes.io/projected/a4170a18-3a02-40ea-ab35-838243909dc0-kube-api-access-l6dlq\") pod \"keystone-bootstrap-d5wlm\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.419601 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-combined-ca-bundle\") pod \"keystone-bootstrap-d5wlm\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.419647 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-dns-swift-storage-0\") pod \"dnsmasq-dns-5798cc97cf-fkkrr\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.419727 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-credential-keys\") pod \"keystone-bootstrap-d5wlm\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.419764 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-dns-svc\") pod \"dnsmasq-dns-5798cc97cf-fkkrr\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.419788 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-ovsdbserver-nb\") pod \"dnsmasq-dns-5798cc97cf-fkkrr\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.419815 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-scripts\") pod \"keystone-bootstrap-d5wlm\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.419869 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4wtc\" (UniqueName: \"kubernetes.io/projected/e9d347b7-7c56-4a22-931e-88552ac24159-kube-api-access-w4wtc\") pod \"dnsmasq-dns-5798cc97cf-fkkrr\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.419936 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-config-data\") pod \"keystone-bootstrap-d5wlm\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.419965 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-fernet-keys\") pod \"keystone-bootstrap-d5wlm\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.459595 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-77755588cd-rgjzw"] Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.461251 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77755588cd-rgjzw" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.466236 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cbh44" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.466400 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.472718 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.472879 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.479487 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77755588cd-rgjzw"] Dec 15 05:52:21 crc kubenswrapper[4747]: E1215 05:52:21.497899 4747 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2635b1f6fa95746ace0821c767d78207fe435b53a55b3a6a7b7098f90fe21e5f is running failed: container process not found" containerID="2635b1f6fa95746ace0821c767d78207fe435b53a55b3a6a7b7098f90fe21e5f" cmd=["grpc_health_probe","-addr=:50051"] Dec 15 05:52:21 crc kubenswrapper[4747]: E1215 05:52:21.498262 4747 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2635b1f6fa95746ace0821c767d78207fe435b53a55b3a6a7b7098f90fe21e5f is running failed: container process not found" containerID="2635b1f6fa95746ace0821c767d78207fe435b53a55b3a6a7b7098f90fe21e5f" cmd=["grpc_health_probe","-addr=:50051"] Dec 15 05:52:21 crc kubenswrapper[4747]: E1215 05:52:21.498459 4747 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if 
PID of 2635b1f6fa95746ace0821c767d78207fe435b53a55b3a6a7b7098f90fe21e5f is running failed: container process not found" containerID="2635b1f6fa95746ace0821c767d78207fe435b53a55b3a6a7b7098f90fe21e5f" cmd=["grpc_health_probe","-addr=:50051"] Dec 15 05:52:21 crc kubenswrapper[4747]: E1215 05:52:21.498493 4747 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2635b1f6fa95746ace0821c767d78207fe435b53a55b3a6a7b7098f90fe21e5f is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-qs2mw" podUID="c603fa2b-48da-497f-82c0-9929a9e155a6" containerName="registry-server" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.521922 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wtc\" (UniqueName: \"kubernetes.io/projected/e9d347b7-7c56-4a22-931e-88552ac24159-kube-api-access-w4wtc\") pod \"dnsmasq-dns-5798cc97cf-fkkrr\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.523101 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-config-data\") pod \"keystone-bootstrap-d5wlm\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.523274 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-fernet-keys\") pod \"keystone-bootstrap-d5wlm\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.523463 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-ovsdbserver-sb\") pod \"dnsmasq-dns-5798cc97cf-fkkrr\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.523580 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kdxz\" (UniqueName: \"kubernetes.io/projected/b6cfb859-aec3-41c6-bb59-7e84b23396bd-kube-api-access-6kdxz\") pod \"neutron-77755588cd-rgjzw\" (UID: \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\") " pod="openstack/neutron-77755588cd-rgjzw" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.523683 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-config\") pod \"neutron-77755588cd-rgjzw\" (UID: \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\") " pod="openstack/neutron-77755588cd-rgjzw" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.523803 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-config\") pod \"dnsmasq-dns-5798cc97cf-fkkrr\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.523889 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6dlq\" (UniqueName: \"kubernetes.io/projected/a4170a18-3a02-40ea-ab35-838243909dc0-kube-api-access-l6dlq\") pod \"keystone-bootstrap-d5wlm\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.524008 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-combined-ca-bundle\") pod \"keystone-bootstrap-d5wlm\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.524119 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-dns-swift-storage-0\") pod \"dnsmasq-dns-5798cc97cf-fkkrr\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.524312 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-credential-keys\") pod \"keystone-bootstrap-d5wlm\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.524418 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-combined-ca-bundle\") pod \"neutron-77755588cd-rgjzw\" (UID: \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\") " pod="openstack/neutron-77755588cd-rgjzw" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.524767 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-dns-svc\") pod \"dnsmasq-dns-5798cc97cf-fkkrr\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.527667 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-ovsdbserver-nb\") pod \"dnsmasq-dns-5798cc97cf-fkkrr\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.528195 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-scripts\") pod \"keystone-bootstrap-d5wlm\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.527816 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-dns-swift-storage-0\") pod \"dnsmasq-dns-5798cc97cf-fkkrr\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.527325 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-ovsdbserver-sb\") pod \"dnsmasq-dns-5798cc97cf-fkkrr\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.526103 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-dns-svc\") pod \"dnsmasq-dns-5798cc97cf-fkkrr\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.529512 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5798cc97cf-fkkrr\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.529703 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-httpd-config\") pod \"neutron-77755588cd-rgjzw\" (UID: \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\") " pod="openstack/neutron-77755588cd-rgjzw" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.529969 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-ovndb-tls-certs\") pod \"neutron-77755588cd-rgjzw\" (UID: \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\") " pod="openstack/neutron-77755588cd-rgjzw" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.530563 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-config\") pod \"dnsmasq-dns-5798cc97cf-fkkrr\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.531481 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-fernet-keys\") pod \"keystone-bootstrap-d5wlm\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.531572 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-config-data\") pod \"keystone-bootstrap-d5wlm\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " 
pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.531711 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-combined-ca-bundle\") pod \"keystone-bootstrap-d5wlm\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.533644 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-credential-keys\") pod \"keystone-bootstrap-d5wlm\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.533688 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-scripts\") pod \"keystone-bootstrap-d5wlm\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.540823 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6dlq\" (UniqueName: \"kubernetes.io/projected/a4170a18-3a02-40ea-ab35-838243909dc0-kube-api-access-l6dlq\") pod \"keystone-bootstrap-d5wlm\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.541608 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4wtc\" (UniqueName: \"kubernetes.io/projected/e9d347b7-7c56-4a22-931e-88552ac24159-kube-api-access-w4wtc\") pod \"dnsmasq-dns-5798cc97cf-fkkrr\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.631735 4747 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-combined-ca-bundle\") pod \"neutron-77755588cd-rgjzw\" (UID: \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\") " pod="openstack/neutron-77755588cd-rgjzw" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.631834 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-httpd-config\") pod \"neutron-77755588cd-rgjzw\" (UID: \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\") " pod="openstack/neutron-77755588cd-rgjzw" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.631868 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-ovndb-tls-certs\") pod \"neutron-77755588cd-rgjzw\" (UID: \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\") " pod="openstack/neutron-77755588cd-rgjzw" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.632055 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kdxz\" (UniqueName: \"kubernetes.io/projected/b6cfb859-aec3-41c6-bb59-7e84b23396bd-kube-api-access-6kdxz\") pod \"neutron-77755588cd-rgjzw\" (UID: \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\") " pod="openstack/neutron-77755588cd-rgjzw" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.632094 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-config\") pod \"neutron-77755588cd-rgjzw\" (UID: \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\") " pod="openstack/neutron-77755588cd-rgjzw" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.635098 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-combined-ca-bundle\") pod \"neutron-77755588cd-rgjzw\" (UID: \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\") " pod="openstack/neutron-77755588cd-rgjzw" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.636514 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-httpd-config\") pod \"neutron-77755588cd-rgjzw\" (UID: \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\") " pod="openstack/neutron-77755588cd-rgjzw" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.636599 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-ovndb-tls-certs\") pod \"neutron-77755588cd-rgjzw\" (UID: \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\") " pod="openstack/neutron-77755588cd-rgjzw" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.640200 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.640680 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-config\") pod \"neutron-77755588cd-rgjzw\" (UID: \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\") " pod="openstack/neutron-77755588cd-rgjzw" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.656412 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kdxz\" (UniqueName: \"kubernetes.io/projected/b6cfb859-aec3-41c6-bb59-7e84b23396bd-kube-api-access-6kdxz\") pod \"neutron-77755588cd-rgjzw\" (UID: \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\") " pod="openstack/neutron-77755588cd-rgjzw" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.727771 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:21 crc kubenswrapper[4747]: I1215 05:52:21.786209 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77755588cd-rgjzw" Dec 15 05:52:22 crc kubenswrapper[4747]: I1215 05:52:22.638584 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51fe6b5c-64f2-40fd-aa82-c8b615c8d198" path="/var/lib/kubelet/pods/51fe6b5c-64f2-40fd-aa82-c8b615c8d198/volumes" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.305877 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7b9f9565dc-vlcmk"] Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.307694 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.310265 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.312019 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.326599 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b9f9565dc-vlcmk"] Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.380301 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7-public-tls-certs\") pod \"neutron-7b9f9565dc-vlcmk\" (UID: \"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7\") " pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.380462 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7-config\") pod 
\"neutron-7b9f9565dc-vlcmk\" (UID: \"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7\") " pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.380507 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7-ovndb-tls-certs\") pod \"neutron-7b9f9565dc-vlcmk\" (UID: \"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7\") " pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.380610 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mccs\" (UniqueName: \"kubernetes.io/projected/8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7-kube-api-access-8mccs\") pod \"neutron-7b9f9565dc-vlcmk\" (UID: \"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7\") " pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.380648 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7-combined-ca-bundle\") pod \"neutron-7b9f9565dc-vlcmk\" (UID: \"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7\") " pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.380764 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7-httpd-config\") pod \"neutron-7b9f9565dc-vlcmk\" (UID: \"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7\") " pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.380825 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7-internal-tls-certs\") pod \"neutron-7b9f9565dc-vlcmk\" (UID: \"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7\") " pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.482194 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7-httpd-config\") pod \"neutron-7b9f9565dc-vlcmk\" (UID: \"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7\") " pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.482261 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7-internal-tls-certs\") pod \"neutron-7b9f9565dc-vlcmk\" (UID: \"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7\") " pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.482345 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7-public-tls-certs\") pod \"neutron-7b9f9565dc-vlcmk\" (UID: \"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7\") " pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.482449 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7-config\") pod \"neutron-7b9f9565dc-vlcmk\" (UID: \"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7\") " pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.482472 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7-ovndb-tls-certs\") pod \"neutron-7b9f9565dc-vlcmk\" 
(UID: \"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7\") " pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.482544 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mccs\" (UniqueName: \"kubernetes.io/projected/8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7-kube-api-access-8mccs\") pod \"neutron-7b9f9565dc-vlcmk\" (UID: \"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7\") " pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.482577 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7-combined-ca-bundle\") pod \"neutron-7b9f9565dc-vlcmk\" (UID: \"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7\") " pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.489571 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7-internal-tls-certs\") pod \"neutron-7b9f9565dc-vlcmk\" (UID: \"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7\") " pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.493208 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7-combined-ca-bundle\") pod \"neutron-7b9f9565dc-vlcmk\" (UID: \"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7\") " pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.494439 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7-ovndb-tls-certs\") pod \"neutron-7b9f9565dc-vlcmk\" (UID: \"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7\") " pod="openstack/neutron-7b9f9565dc-vlcmk" 
Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.502253 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7-config\") pod \"neutron-7b9f9565dc-vlcmk\" (UID: \"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7\") " pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.505452 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7-httpd-config\") pod \"neutron-7b9f9565dc-vlcmk\" (UID: \"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7\") " pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.506482 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mccs\" (UniqueName: \"kubernetes.io/projected/8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7-kube-api-access-8mccs\") pod \"neutron-7b9f9565dc-vlcmk\" (UID: \"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7\") " pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.509651 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7-public-tls-certs\") pod \"neutron-7b9f9565dc-vlcmk\" (UID: \"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7\") " pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:23 crc kubenswrapper[4747]: I1215 05:52:23.628111 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:26 crc kubenswrapper[4747]: I1215 05:52:26.758033 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" podUID="bade9597-335c-43a4-9477-ab4f08999fa8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.107:5353: i/o timeout" Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.519146 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.524343 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qs2mw" Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.711499 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-config\") pod \"bade9597-335c-43a4-9477-ab4f08999fa8\" (UID: \"bade9597-335c-43a4-9477-ab4f08999fa8\") " Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.711616 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c603fa2b-48da-497f-82c0-9929a9e155a6-utilities\") pod \"c603fa2b-48da-497f-82c0-9929a9e155a6\" (UID: \"c603fa2b-48da-497f-82c0-9929a9e155a6\") " Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.711659 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-ovsdbserver-sb\") pod \"bade9597-335c-43a4-9477-ab4f08999fa8\" (UID: \"bade9597-335c-43a4-9477-ab4f08999fa8\") " Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.711841 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tt89\" (UniqueName: 
\"kubernetes.io/projected/bade9597-335c-43a4-9477-ab4f08999fa8-kube-api-access-4tt89\") pod \"bade9597-335c-43a4-9477-ab4f08999fa8\" (UID: \"bade9597-335c-43a4-9477-ab4f08999fa8\") " Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.711888 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c603fa2b-48da-497f-82c0-9929a9e155a6-catalog-content\") pod \"c603fa2b-48da-497f-82c0-9929a9e155a6\" (UID: \"c603fa2b-48da-497f-82c0-9929a9e155a6\") " Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.711947 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-ovsdbserver-nb\") pod \"bade9597-335c-43a4-9477-ab4f08999fa8\" (UID: \"bade9597-335c-43a4-9477-ab4f08999fa8\") " Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.712001 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-dns-svc\") pod \"bade9597-335c-43a4-9477-ab4f08999fa8\" (UID: \"bade9597-335c-43a4-9477-ab4f08999fa8\") " Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.712028 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4s6n\" (UniqueName: \"kubernetes.io/projected/c603fa2b-48da-497f-82c0-9929a9e155a6-kube-api-access-r4s6n\") pod \"c603fa2b-48da-497f-82c0-9929a9e155a6\" (UID: \"c603fa2b-48da-497f-82c0-9929a9e155a6\") " Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.712445 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c603fa2b-48da-497f-82c0-9929a9e155a6-utilities" (OuterVolumeSpecName: "utilities") pod "c603fa2b-48da-497f-82c0-9929a9e155a6" (UID: "c603fa2b-48da-497f-82c0-9929a9e155a6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.713152 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c603fa2b-48da-497f-82c0-9929a9e155a6-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.716584 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bade9597-335c-43a4-9477-ab4f08999fa8-kube-api-access-4tt89" (OuterVolumeSpecName: "kube-api-access-4tt89") pod "bade9597-335c-43a4-9477-ab4f08999fa8" (UID: "bade9597-335c-43a4-9477-ab4f08999fa8"). InnerVolumeSpecName "kube-api-access-4tt89". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.722679 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c603fa2b-48da-497f-82c0-9929a9e155a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c603fa2b-48da-497f-82c0-9929a9e155a6" (UID: "c603fa2b-48da-497f-82c0-9929a9e155a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.724364 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c603fa2b-48da-497f-82c0-9929a9e155a6-kube-api-access-r4s6n" (OuterVolumeSpecName: "kube-api-access-r4s6n") pod "c603fa2b-48da-497f-82c0-9929a9e155a6" (UID: "c603fa2b-48da-497f-82c0-9929a9e155a6"). InnerVolumeSpecName "kube-api-access-r4s6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.746347 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-config" (OuterVolumeSpecName: "config") pod "bade9597-335c-43a4-9477-ab4f08999fa8" (UID: "bade9597-335c-43a4-9477-ab4f08999fa8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.748246 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bade9597-335c-43a4-9477-ab4f08999fa8" (UID: "bade9597-335c-43a4-9477-ab4f08999fa8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.751527 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bade9597-335c-43a4-9477-ab4f08999fa8" (UID: "bade9597-335c-43a4-9477-ab4f08999fa8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.759719 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bade9597-335c-43a4-9477-ab4f08999fa8" (UID: "bade9597-335c-43a4-9477-ab4f08999fa8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.815636 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.815666 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.815706 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tt89\" (UniqueName: \"kubernetes.io/projected/bade9597-335c-43a4-9477-ab4f08999fa8-kube-api-access-4tt89\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.815716 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c603fa2b-48da-497f-82c0-9929a9e155a6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.815726 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.815734 4747 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bade9597-335c-43a4-9477-ab4f08999fa8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.815742 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4s6n\" (UniqueName: \"kubernetes.io/projected/c603fa2b-48da-497f-82c0-9929a9e155a6-kube-api-access-r4s6n\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:29 crc kubenswrapper[4747]: I1215 05:52:29.821769 4747 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-25lpq"] Dec 15 05:52:30 crc kubenswrapper[4747]: I1215 05:52:30.304500 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" event={"ID":"bade9597-335c-43a4-9477-ab4f08999fa8","Type":"ContainerDied","Data":"03644ad93afaa3dd9ce7257f50b3b316cef34cb35104dbf4127b5f663cb61414"} Dec 15 05:52:30 crc kubenswrapper[4747]: I1215 05:52:30.304547 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" Dec 15 05:52:30 crc kubenswrapper[4747]: I1215 05:52:30.311990 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qs2mw" event={"ID":"c603fa2b-48da-497f-82c0-9929a9e155a6","Type":"ContainerDied","Data":"54d1e259eb4a473fb06b074f3bb757d027c8eab94c7fd280cd23d4a61ec2a8aa"} Dec 15 05:52:30 crc kubenswrapper[4747]: I1215 05:52:30.312060 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qs2mw" Dec 15 05:52:30 crc kubenswrapper[4747]: I1215 05:52:30.350750 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fb8b8965-6vtwl"] Dec 15 05:52:30 crc kubenswrapper[4747]: I1215 05:52:30.356990 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67fb8b8965-6vtwl"] Dec 15 05:52:30 crc kubenswrapper[4747]: I1215 05:52:30.360224 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qs2mw"] Dec 15 05:52:30 crc kubenswrapper[4747]: I1215 05:52:30.365304 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qs2mw"] Dec 15 05:52:30 crc kubenswrapper[4747]: I1215 05:52:30.468190 4747 scope.go:117] "RemoveContainer" containerID="32f72745f0f98e0da1d41634a860dfaf6b0c207643255b9d261896f913318081" Dec 15 05:52:30 crc kubenswrapper[4747]: E1215 05:52:30.483187 4747 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos9/openstack-cinder-api:2e38c527ddf6e767040136ecf014e7b9" Dec 15 05:52:30 crc kubenswrapper[4747]: E1215 05:52:30.483427 4747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos9/openstack-cinder-api:2e38c527ddf6e767040136ecf014e7b9" Dec 15 05:52:30 crc kubenswrapper[4747]: E1215 05:52:30.483547 4747 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.rdoproject.org/podified-master-centos9/openstack-cinder-api:2e38c527ddf6e767040136ecf014e7b9,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4ctpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-4hlv6_openstack(2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 15 05:52:30 crc kubenswrapper[4747]: E1215 05:52:30.485204 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-4hlv6" podUID="2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6" Dec 15 05:52:30 crc kubenswrapper[4747]: W1215 05:52:30.539440 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod656ef4f9_8e82_43ce_b0f9_b654bcecb12a.slice/crio-d615cc1eab2d41d6c377647054e79ac3ebc09a0676a36bdb9680715b498ad64f WatchSource:0}: Error finding container d615cc1eab2d41d6c377647054e79ac3ebc09a0676a36bdb9680715b498ad64f: Status 404 returned error can't find the container with id d615cc1eab2d41d6c377647054e79ac3ebc09a0676a36bdb9680715b498ad64f Dec 15 05:52:30 crc kubenswrapper[4747]: I1215 05:52:30.626109 4747 scope.go:117] "RemoveContainer" containerID="1e1076ae4c095fb9b56fc9d72549bf653a0c5aa0bd38748c5df9428a48fa5aff" Dec 15 05:52:30 crc kubenswrapper[4747]: I1215 05:52:30.689915 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bade9597-335c-43a4-9477-ab4f08999fa8" path="/var/lib/kubelet/pods/bade9597-335c-43a4-9477-ab4f08999fa8/volumes" Dec 15 05:52:30 crc kubenswrapper[4747]: I1215 05:52:30.692152 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c603fa2b-48da-497f-82c0-9929a9e155a6" path="/var/lib/kubelet/pods/c603fa2b-48da-497f-82c0-9929a9e155a6/volumes" Dec 15 05:52:30 crc 
kubenswrapper[4747]: I1215 05:52:30.741013 4747 scope.go:117] "RemoveContainer" containerID="6f2213bfcc90b38e2a425166186e09261933b5678a4fe53155368f30ab3b3c9d" Dec 15 05:52:30 crc kubenswrapper[4747]: I1215 05:52:30.823581 4747 scope.go:117] "RemoveContainer" containerID="f52a5430d4461fd55c11ee7a0137c08c01ef2af7d98f22774a239b372cbabebd" Dec 15 05:52:30 crc kubenswrapper[4747]: I1215 05:52:30.903228 4747 scope.go:117] "RemoveContainer" containerID="568389ca60b28559ad7842ff649954380a7d61c626dd8eb971c620e30c076956" Dec 15 05:52:30 crc kubenswrapper[4747]: I1215 05:52:30.942736 4747 scope.go:117] "RemoveContainer" containerID="2635b1f6fa95746ace0821c767d78207fe435b53a55b3a6a7b7098f90fe21e5f" Dec 15 05:52:31 crc kubenswrapper[4747]: I1215 05:52:31.048017 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5798cc97cf-fkkrr"] Dec 15 05:52:31 crc kubenswrapper[4747]: I1215 05:52:31.055991 4747 scope.go:117] "RemoveContainer" containerID="96764e5cb0ee9e5dc6a769849b8b4a6c1902d310dd77825ee1d4e43451750591" Dec 15 05:52:31 crc kubenswrapper[4747]: I1215 05:52:31.162213 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5tm7d"] Dec 15 05:52:31 crc kubenswrapper[4747]: I1215 05:52:31.162298 4747 scope.go:117] "RemoveContainer" containerID="2389af3df909f6df9adbcfd83fcf3e2d233def84c20f022f146bcc8143e3f915" Dec 15 05:52:31 crc kubenswrapper[4747]: I1215 05:52:31.241986 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-d5wlm"] Dec 15 05:52:31 crc kubenswrapper[4747]: I1215 05:52:31.352708 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d5wlm" event={"ID":"a4170a18-3a02-40ea-ab35-838243909dc0","Type":"ContainerStarted","Data":"e396d352c98b8afb853b5cb449d0abd5e2dc59221c1af135adfdd6ae512f0108"} Dec 15 05:52:31 crc kubenswrapper[4747]: I1215 05:52:31.353998 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-5tm7d" event={"ID":"7aa44578-a974-4c1f-90db-014ecf544678","Type":"ContainerStarted","Data":"8c6843bd23ea75a562c8a3bfcfada602d25bfd8807bd52821633bee358f5b150"} Dec 15 05:52:31 crc kubenswrapper[4747]: I1215 05:52:31.373110 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n8z94" event={"ID":"f285358e-df22-44d4-b3b4-5a2dc69399c6","Type":"ContainerStarted","Data":"a4e4c19ca50e70674267038dcfd17e451fb5cb7c03e62689bc0f274c534f4aa5"} Dec 15 05:52:31 crc kubenswrapper[4747]: I1215 05:52:31.390206 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"873188a0-9dbb-4c95-b39e-cd503e07e59f","Type":"ContainerStarted","Data":"8dc5ef3d40e3025d26d6edb7c1e9ee191c80cb1f496fb2b91f4aed5be6317712"} Dec 15 05:52:31 crc kubenswrapper[4747]: I1215 05:52:31.398480 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" event={"ID":"e9d347b7-7c56-4a22-931e-88552ac24159","Type":"ContainerStarted","Data":"4842b76d8c4a50f7174b6b0672eb5ec7ca96240eeddfce8fee6de50fc6cac59d"} Dec 15 05:52:31 crc kubenswrapper[4747]: I1215 05:52:31.398495 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-n8z94" podStartSLOduration=2.285737466 podStartE2EDuration="26.398463275s" podCreationTimestamp="2025-12-15 05:52:05 +0000 UTC" firstStartedPulling="2025-12-15 05:52:06.335668012 +0000 UTC m=+890.032179929" lastFinishedPulling="2025-12-15 05:52:30.448393821 +0000 UTC m=+914.144905738" observedRunningTime="2025-12-15 05:52:31.390292309 +0000 UTC m=+915.086804226" watchObservedRunningTime="2025-12-15 05:52:31.398463275 +0000 UTC m=+915.094975192" Dec 15 05:52:31 crc kubenswrapper[4747]: I1215 05:52:31.400107 4747 generic.go:334] "Generic (PLEG): container finished" podID="656ef4f9-8e82-43ce-b0f9-b654bcecb12a" containerID="78644837d62eade3980ba2fac1f320a2ff11b1f64ee916f7110386745b8a679c" 
exitCode=0 Dec 15 05:52:31 crc kubenswrapper[4747]: I1215 05:52:31.400158 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25lpq" event={"ID":"656ef4f9-8e82-43ce-b0f9-b654bcecb12a","Type":"ContainerDied","Data":"78644837d62eade3980ba2fac1f320a2ff11b1f64ee916f7110386745b8a679c"} Dec 15 05:52:31 crc kubenswrapper[4747]: I1215 05:52:31.400187 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25lpq" event={"ID":"656ef4f9-8e82-43ce-b0f9-b654bcecb12a","Type":"ContainerStarted","Data":"d615cc1eab2d41d6c377647054e79ac3ebc09a0676a36bdb9680715b498ad64f"} Dec 15 05:52:31 crc kubenswrapper[4747]: I1215 05:52:31.412296 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zsd2g" event={"ID":"7acaca59-7888-4a05-8eb3-f925f2f8d44b","Type":"ContainerStarted","Data":"d75b321f864f0dbe87288c0553a53f18aefaae1564116ae1192de1a875b1715b"} Dec 15 05:52:31 crc kubenswrapper[4747]: I1215 05:52:31.421835 4747 generic.go:334] "Generic (PLEG): container finished" podID="e8515fd4-ce21-4f89-a703-e3807fa6fd90" containerID="be2b4f80044db2a33a65ac2700a43fb24fe9df3d40b1aae0c63903743706a042" exitCode=0 Dec 15 05:52:31 crc kubenswrapper[4747]: I1215 05:52:31.422000 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84m7w" event={"ID":"e8515fd4-ce21-4f89-a703-e3807fa6fd90","Type":"ContainerDied","Data":"be2b4f80044db2a33a65ac2700a43fb24fe9df3d40b1aae0c63903743706a042"} Dec 15 05:52:31 crc kubenswrapper[4747]: E1215 05:52:31.424548 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos9/openstack-cinder-api:2e38c527ddf6e767040136ecf014e7b9\\\"\"" pod="openstack/cinder-db-sync-4hlv6" podUID="2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6" Dec 15 05:52:31 crc kubenswrapper[4747]: 
I1215 05:52:31.448459 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-zsd2g" podStartSLOduration=2.216735218 podStartE2EDuration="26.448439511s" podCreationTimestamp="2025-12-15 05:52:05 +0000 UTC" firstStartedPulling="2025-12-15 05:52:06.335171959 +0000 UTC m=+890.031683876" lastFinishedPulling="2025-12-15 05:52:30.566876251 +0000 UTC m=+914.263388169" observedRunningTime="2025-12-15 05:52:31.442846214 +0000 UTC m=+915.139358131" watchObservedRunningTime="2025-12-15 05:52:31.448439511 +0000 UTC m=+915.144951429" Dec 15 05:52:31 crc kubenswrapper[4747]: I1215 05:52:31.458352 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 15 05:52:31 crc kubenswrapper[4747]: I1215 05:52:31.758846 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67fb8b8965-6vtwl" podUID="bade9597-335c-43a4-9477-ab4f08999fa8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.107:5353: i/o timeout" Dec 15 05:52:32 crc kubenswrapper[4747]: I1215 05:52:32.151524 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 15 05:52:32 crc kubenswrapper[4747]: I1215 05:52:32.325107 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77755588cd-rgjzw"] Dec 15 05:52:32 crc kubenswrapper[4747]: I1215 05:52:32.406860 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b9f9565dc-vlcmk"] Dec 15 05:52:32 crc kubenswrapper[4747]: I1215 05:52:32.438417 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"025696d8-212d-4b2b-bff8-87abde7b3a0b","Type":"ContainerStarted","Data":"d991ef00573efcbdd8dd0d4b061264a9a748353813472f8e334fe651b644f113"} Dec 15 05:52:32 crc kubenswrapper[4747]: I1215 05:52:32.438471 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"025696d8-212d-4b2b-bff8-87abde7b3a0b","Type":"ContainerStarted","Data":"856c16ed11c7910eaaedc1c5d99fbf7ec842a60f20cc53a89f41b6e37f1d335b"} Dec 15 05:52:32 crc kubenswrapper[4747]: I1215 05:52:32.441346 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d5wlm" event={"ID":"a4170a18-3a02-40ea-ab35-838243909dc0","Type":"ContainerStarted","Data":"30090006726f4f8f2e101e709cace0d7149cb9395ae58234657fe2d8507ecf04"} Dec 15 05:52:32 crc kubenswrapper[4747]: I1215 05:52:32.446296 4747 generic.go:334] "Generic (PLEG): container finished" podID="7aa44578-a974-4c1f-90db-014ecf544678" containerID="a8955a242e19efdbabedafcfb8de22160a828e444db40429a0dce2745a98fff3" exitCode=0 Dec 15 05:52:32 crc kubenswrapper[4747]: I1215 05:52:32.446344 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tm7d" event={"ID":"7aa44578-a974-4c1f-90db-014ecf544678","Type":"ContainerDied","Data":"a8955a242e19efdbabedafcfb8de22160a828e444db40429a0dce2745a98fff3"} Dec 15 05:52:32 crc kubenswrapper[4747]: I1215 05:52:32.452365 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8","Type":"ContainerStarted","Data":"d6c040abb4a985552a3d46a35c9bcff5c4c3c8fe2fed47690586d0a179ff3a7e"} Dec 15 05:52:32 crc kubenswrapper[4747]: I1215 05:52:32.458601 4747 generic.go:334] "Generic (PLEG): container finished" podID="e9d347b7-7c56-4a22-931e-88552ac24159" containerID="f6a226bb3d7cecb5bb08753332040466d07d68a1efa5b7c8ffdfbc5744144c49" exitCode=0 Dec 15 05:52:32 crc kubenswrapper[4747]: I1215 05:52:32.459151 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" event={"ID":"e9d347b7-7c56-4a22-931e-88552ac24159","Type":"ContainerDied","Data":"f6a226bb3d7cecb5bb08753332040466d07d68a1efa5b7c8ffdfbc5744144c49"} Dec 15 05:52:32 crc 
kubenswrapper[4747]: I1215 05:52:32.465770 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77755588cd-rgjzw" event={"ID":"b6cfb859-aec3-41c6-bb59-7e84b23396bd","Type":"ContainerStarted","Data":"22437902654a92043637d8cbcabfea6f2dc3a45f5052e61cb0bdd0607178d952"} Dec 15 05:52:32 crc kubenswrapper[4747]: I1215 05:52:32.470166 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-d5wlm" podStartSLOduration=11.470148082 podStartE2EDuration="11.470148082s" podCreationTimestamp="2025-12-15 05:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:52:32.462543531 +0000 UTC m=+916.159055449" watchObservedRunningTime="2025-12-15 05:52:32.470148082 +0000 UTC m=+916.166660000" Dec 15 05:52:32 crc kubenswrapper[4747]: W1215 05:52:32.822034 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e5a51e1_29fc_4fe7_b6d4_5a3227a93ec7.slice/crio-6162df5678daff58458379ef3a6dae2c00f090291c474466314fc2a9c5bfd80f WatchSource:0}: Error finding container 6162df5678daff58458379ef3a6dae2c00f090291c474466314fc2a9c5bfd80f: Status 404 returned error can't find the container with id 6162df5678daff58458379ef3a6dae2c00f090291c474466314fc2a9c5bfd80f Dec 15 05:52:33 crc kubenswrapper[4747]: I1215 05:52:33.475916 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8","Type":"ContainerStarted","Data":"dc0b73a864df0515127d885874d7f1f5e1e02b0cecbde221190011f85b394b7d"} Dec 15 05:52:33 crc kubenswrapper[4747]: I1215 05:52:33.478538 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25lpq" 
event={"ID":"656ef4f9-8e82-43ce-b0f9-b654bcecb12a","Type":"ContainerStarted","Data":"b1b4e5ec228894434b1be940ea3fa0b3387bd126bee48d1e2b2895dd9b621ace"} Dec 15 05:52:33 crc kubenswrapper[4747]: I1215 05:52:33.483810 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"873188a0-9dbb-4c95-b39e-cd503e07e59f","Type":"ContainerStarted","Data":"c70b4cdb307a145485d475fb474f8b91fb623356651dc61ade264fe0001cb4f8"} Dec 15 05:52:33 crc kubenswrapper[4747]: I1215 05:52:33.485717 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84m7w" event={"ID":"e8515fd4-ce21-4f89-a703-e3807fa6fd90","Type":"ContainerStarted","Data":"850f93d17e36f8d99142b261f1ca55952d08b8c04196c71a24c28c350ff612bd"} Dec 15 05:52:33 crc kubenswrapper[4747]: I1215 05:52:33.487321 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77755588cd-rgjzw" event={"ID":"b6cfb859-aec3-41c6-bb59-7e84b23396bd","Type":"ContainerStarted","Data":"1301cbc440a4c64083f43926b1093ad190605f737fcb696187c0c22422a5854f"} Dec 15 05:52:33 crc kubenswrapper[4747]: I1215 05:52:33.487352 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77755588cd-rgjzw" event={"ID":"b6cfb859-aec3-41c6-bb59-7e84b23396bd","Type":"ContainerStarted","Data":"41a17a55928b61d3480f0803e540c0ff36e7018c2c92f3f46c56a1c54f86dd4d"} Dec 15 05:52:33 crc kubenswrapper[4747]: I1215 05:52:33.487472 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-77755588cd-rgjzw" Dec 15 05:52:33 crc kubenswrapper[4747]: I1215 05:52:33.488875 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"025696d8-212d-4b2b-bff8-87abde7b3a0b","Type":"ContainerStarted","Data":"ac19bbda4e664a2896828a1315442fcde4e5ace0b7c0c5289e002c127d811368"} Dec 15 05:52:33 crc kubenswrapper[4747]: I1215 05:52:33.491301 4747 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/neutron-7b9f9565dc-vlcmk" event={"ID":"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7","Type":"ContainerStarted","Data":"371cbf9dd0e9a255b92553eeadd578f38277653bc93b3b025abd9796d898ff42"} Dec 15 05:52:33 crc kubenswrapper[4747]: I1215 05:52:33.491345 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b9f9565dc-vlcmk" event={"ID":"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7","Type":"ContainerStarted","Data":"6162df5678daff58458379ef3a6dae2c00f090291c474466314fc2a9c5bfd80f"} Dec 15 05:52:33 crc kubenswrapper[4747]: I1215 05:52:33.494199 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" event={"ID":"e9d347b7-7c56-4a22-931e-88552ac24159","Type":"ContainerStarted","Data":"cfbdfe656ed85c86b90132e82fe57ed5a8073d9deafbf03db04c7e3403f95c59"} Dec 15 05:52:33 crc kubenswrapper[4747]: I1215 05:52:33.494329 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:33 crc kubenswrapper[4747]: I1215 05:52:33.495658 4747 generic.go:334] "Generic (PLEG): container finished" podID="f285358e-df22-44d4-b3b4-5a2dc69399c6" containerID="a4e4c19ca50e70674267038dcfd17e451fb5cb7c03e62689bc0f274c534f4aa5" exitCode=0 Dec 15 05:52:33 crc kubenswrapper[4747]: I1215 05:52:33.495743 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n8z94" event={"ID":"f285358e-df22-44d4-b3b4-5a2dc69399c6","Type":"ContainerDied","Data":"a4e4c19ca50e70674267038dcfd17e451fb5cb7c03e62689bc0f274c534f4aa5"} Dec 15 05:52:33 crc kubenswrapper[4747]: I1215 05:52:33.553101 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=13.55308422 podStartE2EDuration="13.55308422s" podCreationTimestamp="2025-12-15 05:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-15 05:52:33.551452451 +0000 UTC m=+917.247964369" watchObservedRunningTime="2025-12-15 05:52:33.55308422 +0000 UTC m=+917.249596137" Dec 15 05:52:33 crc kubenswrapper[4747]: I1215 05:52:33.570941 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-84m7w" podStartSLOduration=6.436149112 podStartE2EDuration="26.570905085s" podCreationTimestamp="2025-12-15 05:52:07 +0000 UTC" firstStartedPulling="2025-12-15 05:52:12.766054333 +0000 UTC m=+896.462566251" lastFinishedPulling="2025-12-15 05:52:32.900810307 +0000 UTC m=+916.597322224" observedRunningTime="2025-12-15 05:52:33.565204904 +0000 UTC m=+917.261716822" watchObservedRunningTime="2025-12-15 05:52:33.570905085 +0000 UTC m=+917.267417002" Dec 15 05:52:33 crc kubenswrapper[4747]: I1215 05:52:33.585216 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" podStartSLOduration=12.585203004 podStartE2EDuration="12.585203004s" podCreationTimestamp="2025-12-15 05:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:52:33.580149139 +0000 UTC m=+917.276661056" watchObservedRunningTime="2025-12-15 05:52:33.585203004 +0000 UTC m=+917.281714921" Dec 15 05:52:33 crc kubenswrapper[4747]: I1215 05:52:33.604013 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-77755588cd-rgjzw" podStartSLOduration=12.604000655 podStartE2EDuration="12.604000655s" podCreationTimestamp="2025-12-15 05:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:52:33.598275128 +0000 UTC m=+917.294787045" watchObservedRunningTime="2025-12-15 05:52:33.604000655 +0000 UTC m=+917.300512572" Dec 15 05:52:34 crc kubenswrapper[4747]: I1215 05:52:34.505973 4747 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8","Type":"ContainerStarted","Data":"b8a96438134d446f04ed9de3b208f822638cde09ceca85794ae49b14c867a63e"} Dec 15 05:52:34 crc kubenswrapper[4747]: I1215 05:52:34.510389 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b9f9565dc-vlcmk" event={"ID":"8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7","Type":"ContainerStarted","Data":"18d5a52962826444844e061eb0803062afc63e630a68cb7177819b0c5fb94037"} Dec 15 05:52:34 crc kubenswrapper[4747]: I1215 05:52:34.510473 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:34 crc kubenswrapper[4747]: I1215 05:52:34.512651 4747 generic.go:334] "Generic (PLEG): container finished" podID="656ef4f9-8e82-43ce-b0f9-b654bcecb12a" containerID="b1b4e5ec228894434b1be940ea3fa0b3387bd126bee48d1e2b2895dd9b621ace" exitCode=0 Dec 15 05:52:34 crc kubenswrapper[4747]: I1215 05:52:34.512732 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25lpq" event={"ID":"656ef4f9-8e82-43ce-b0f9-b654bcecb12a","Type":"ContainerDied","Data":"b1b4e5ec228894434b1be940ea3fa0b3387bd126bee48d1e2b2895dd9b621ace"} Dec 15 05:52:34 crc kubenswrapper[4747]: I1215 05:52:34.514788 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tm7d" event={"ID":"7aa44578-a974-4c1f-90db-014ecf544678","Type":"ContainerStarted","Data":"b4ca73ebb81e4e12a5d93ef322091d87b921b8f98024f11ff81cbd6aca39bf5c"} Dec 15 05:52:34 crc kubenswrapper[4747]: I1215 05:52:34.525553 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=14.525538683 podStartE2EDuration="14.525538683s" podCreationTimestamp="2025-12-15 05:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:52:34.5224161 +0000 UTC m=+918.218928027" watchObservedRunningTime="2025-12-15 05:52:34.525538683 +0000 UTC m=+918.222050590" Dec 15 05:52:34 crc kubenswrapper[4747]: I1215 05:52:34.578962 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7b9f9565dc-vlcmk" podStartSLOduration=11.578916987 podStartE2EDuration="11.578916987s" podCreationTimestamp="2025-12-15 05:52:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:52:34.577079632 +0000 UTC m=+918.273591539" watchObservedRunningTime="2025-12-15 05:52:34.578916987 +0000 UTC m=+918.275428904" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.078343 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-n8z94" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.161527 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f285358e-df22-44d4-b3b4-5a2dc69399c6-combined-ca-bundle\") pod \"f285358e-df22-44d4-b3b4-5a2dc69399c6\" (UID: \"f285358e-df22-44d4-b3b4-5a2dc69399c6\") " Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.161617 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w8fw\" (UniqueName: \"kubernetes.io/projected/f285358e-df22-44d4-b3b4-5a2dc69399c6-kube-api-access-2w8fw\") pod \"f285358e-df22-44d4-b3b4-5a2dc69399c6\" (UID: \"f285358e-df22-44d4-b3b4-5a2dc69399c6\") " Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.161782 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f285358e-df22-44d4-b3b4-5a2dc69399c6-logs\") pod \"f285358e-df22-44d4-b3b4-5a2dc69399c6\" (UID: 
\"f285358e-df22-44d4-b3b4-5a2dc69399c6\") " Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.161975 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f285358e-df22-44d4-b3b4-5a2dc69399c6-scripts\") pod \"f285358e-df22-44d4-b3b4-5a2dc69399c6\" (UID: \"f285358e-df22-44d4-b3b4-5a2dc69399c6\") " Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.162035 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f285358e-df22-44d4-b3b4-5a2dc69399c6-config-data\") pod \"f285358e-df22-44d4-b3b4-5a2dc69399c6\" (UID: \"f285358e-df22-44d4-b3b4-5a2dc69399c6\") " Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.170031 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f285358e-df22-44d4-b3b4-5a2dc69399c6-logs" (OuterVolumeSpecName: "logs") pod "f285358e-df22-44d4-b3b4-5a2dc69399c6" (UID: "f285358e-df22-44d4-b3b4-5a2dc69399c6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.175059 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f285358e-df22-44d4-b3b4-5a2dc69399c6-kube-api-access-2w8fw" (OuterVolumeSpecName: "kube-api-access-2w8fw") pod "f285358e-df22-44d4-b3b4-5a2dc69399c6" (UID: "f285358e-df22-44d4-b3b4-5a2dc69399c6"). InnerVolumeSpecName "kube-api-access-2w8fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.198136 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f285358e-df22-44d4-b3b4-5a2dc69399c6-scripts" (OuterVolumeSpecName: "scripts") pod "f285358e-df22-44d4-b3b4-5a2dc69399c6" (UID: "f285358e-df22-44d4-b3b4-5a2dc69399c6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.206312 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f285358e-df22-44d4-b3b4-5a2dc69399c6-config-data" (OuterVolumeSpecName: "config-data") pod "f285358e-df22-44d4-b3b4-5a2dc69399c6" (UID: "f285358e-df22-44d4-b3b4-5a2dc69399c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.216737 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f285358e-df22-44d4-b3b4-5a2dc69399c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f285358e-df22-44d4-b3b4-5a2dc69399c6" (UID: "f285358e-df22-44d4-b3b4-5a2dc69399c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.264918 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f285358e-df22-44d4-b3b4-5a2dc69399c6-logs\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.265697 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f285358e-df22-44d4-b3b4-5a2dc69399c6-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.265710 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f285358e-df22-44d4-b3b4-5a2dc69399c6-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.265723 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f285358e-df22-44d4-b3b4-5a2dc69399c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:35 crc kubenswrapper[4747]: 
I1215 05:52:35.265735 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w8fw\" (UniqueName: \"kubernetes.io/projected/f285358e-df22-44d4-b3b4-5a2dc69399c6-kube-api-access-2w8fw\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.539254 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n8z94" event={"ID":"f285358e-df22-44d4-b3b4-5a2dc69399c6","Type":"ContainerDied","Data":"c9af89d1a0d9c36a9bfb8d842436adce53b02ec8842456c2f480a0f03d5ba0db"} Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.539310 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9af89d1a0d9c36a9bfb8d842436adce53b02ec8842456c2f480a0f03d5ba0db" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.539712 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-n8z94" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.542550 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25lpq" event={"ID":"656ef4f9-8e82-43ce-b0f9-b654bcecb12a","Type":"ContainerStarted","Data":"722649aebf439cff5f3e65b76a739e8a8d7479c5072467c97f6e8da77ae71b08"} Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.577860 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-25lpq" podStartSLOduration=20.7994436 podStartE2EDuration="24.577845117s" podCreationTimestamp="2025-12-15 05:52:11 +0000 UTC" firstStartedPulling="2025-12-15 05:52:31.401716433 +0000 UTC m=+915.098228341" lastFinishedPulling="2025-12-15 05:52:35.180117941 +0000 UTC m=+918.876629858" observedRunningTime="2025-12-15 05:52:35.56169271 +0000 UTC m=+919.258204627" watchObservedRunningTime="2025-12-15 05:52:35.577845117 +0000 UTC m=+919.274357035" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.640013 4747 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/placement-d77548fc6-2zqkd"] Dec 15 05:52:35 crc kubenswrapper[4747]: E1215 05:52:35.640474 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f285358e-df22-44d4-b3b4-5a2dc69399c6" containerName="placement-db-sync" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.640494 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f285358e-df22-44d4-b3b4-5a2dc69399c6" containerName="placement-db-sync" Dec 15 05:52:35 crc kubenswrapper[4747]: E1215 05:52:35.640517 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c603fa2b-48da-497f-82c0-9929a9e155a6" containerName="registry-server" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.640523 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="c603fa2b-48da-497f-82c0-9929a9e155a6" containerName="registry-server" Dec 15 05:52:35 crc kubenswrapper[4747]: E1215 05:52:35.640540 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bade9597-335c-43a4-9477-ab4f08999fa8" containerName="init" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.640545 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="bade9597-335c-43a4-9477-ab4f08999fa8" containerName="init" Dec 15 05:52:35 crc kubenswrapper[4747]: E1215 05:52:35.640555 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bade9597-335c-43a4-9477-ab4f08999fa8" containerName="dnsmasq-dns" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.640560 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="bade9597-335c-43a4-9477-ab4f08999fa8" containerName="dnsmasq-dns" Dec 15 05:52:35 crc kubenswrapper[4747]: E1215 05:52:35.640570 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c603fa2b-48da-497f-82c0-9929a9e155a6" containerName="extract-content" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.640575 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="c603fa2b-48da-497f-82c0-9929a9e155a6" 
containerName="extract-content" Dec 15 05:52:35 crc kubenswrapper[4747]: E1215 05:52:35.640595 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c603fa2b-48da-497f-82c0-9929a9e155a6" containerName="extract-utilities" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.640600 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="c603fa2b-48da-497f-82c0-9929a9e155a6" containerName="extract-utilities" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.640767 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="bade9597-335c-43a4-9477-ab4f08999fa8" containerName="dnsmasq-dns" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.640781 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f285358e-df22-44d4-b3b4-5a2dc69399c6" containerName="placement-db-sync" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.640812 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="c603fa2b-48da-497f-82c0-9929a9e155a6" containerName="registry-server" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.641759 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.645348 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.645390 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.646744 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.646744 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.646749 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-84b9l" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.650901 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d77548fc6-2zqkd"] Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.773915 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18ea26dc-78f1-479e-9e7c-722632f9304d-config-data\") pod \"placement-d77548fc6-2zqkd\" (UID: \"18ea26dc-78f1-479e-9e7c-722632f9304d\") " pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.774238 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ea26dc-78f1-479e-9e7c-722632f9304d-internal-tls-certs\") pod \"placement-d77548fc6-2zqkd\" (UID: \"18ea26dc-78f1-479e-9e7c-722632f9304d\") " pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.774306 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ea26dc-78f1-479e-9e7c-722632f9304d-public-tls-certs\") pod \"placement-d77548fc6-2zqkd\" (UID: \"18ea26dc-78f1-479e-9e7c-722632f9304d\") " pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.774354 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18ea26dc-78f1-479e-9e7c-722632f9304d-logs\") pod \"placement-d77548fc6-2zqkd\" (UID: \"18ea26dc-78f1-479e-9e7c-722632f9304d\") " pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.774383 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p775q\" (UniqueName: \"kubernetes.io/projected/18ea26dc-78f1-479e-9e7c-722632f9304d-kube-api-access-p775q\") pod \"placement-d77548fc6-2zqkd\" (UID: \"18ea26dc-78f1-479e-9e7c-722632f9304d\") " pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.774410 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18ea26dc-78f1-479e-9e7c-722632f9304d-combined-ca-bundle\") pod \"placement-d77548fc6-2zqkd\" (UID: \"18ea26dc-78f1-479e-9e7c-722632f9304d\") " pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.774435 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18ea26dc-78f1-479e-9e7c-722632f9304d-scripts\") pod \"placement-d77548fc6-2zqkd\" (UID: \"18ea26dc-78f1-479e-9e7c-722632f9304d\") " pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.876167 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ea26dc-78f1-479e-9e7c-722632f9304d-public-tls-certs\") pod \"placement-d77548fc6-2zqkd\" (UID: \"18ea26dc-78f1-479e-9e7c-722632f9304d\") " pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.876252 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18ea26dc-78f1-479e-9e7c-722632f9304d-logs\") pod \"placement-d77548fc6-2zqkd\" (UID: \"18ea26dc-78f1-479e-9e7c-722632f9304d\") " pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.876285 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p775q\" (UniqueName: \"kubernetes.io/projected/18ea26dc-78f1-479e-9e7c-722632f9304d-kube-api-access-p775q\") pod \"placement-d77548fc6-2zqkd\" (UID: \"18ea26dc-78f1-479e-9e7c-722632f9304d\") " pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.876337 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18ea26dc-78f1-479e-9e7c-722632f9304d-combined-ca-bundle\") pod \"placement-d77548fc6-2zqkd\" (UID: \"18ea26dc-78f1-479e-9e7c-722632f9304d\") " pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.876360 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18ea26dc-78f1-479e-9e7c-722632f9304d-scripts\") pod \"placement-d77548fc6-2zqkd\" (UID: \"18ea26dc-78f1-479e-9e7c-722632f9304d\") " pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.876449 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/18ea26dc-78f1-479e-9e7c-722632f9304d-config-data\") pod \"placement-d77548fc6-2zqkd\" (UID: \"18ea26dc-78f1-479e-9e7c-722632f9304d\") " pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.876511 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ea26dc-78f1-479e-9e7c-722632f9304d-internal-tls-certs\") pod \"placement-d77548fc6-2zqkd\" (UID: \"18ea26dc-78f1-479e-9e7c-722632f9304d\") " pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.884623 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18ea26dc-78f1-479e-9e7c-722632f9304d-logs\") pod \"placement-d77548fc6-2zqkd\" (UID: \"18ea26dc-78f1-479e-9e7c-722632f9304d\") " pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.884829 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ea26dc-78f1-479e-9e7c-722632f9304d-internal-tls-certs\") pod \"placement-d77548fc6-2zqkd\" (UID: \"18ea26dc-78f1-479e-9e7c-722632f9304d\") " pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.886380 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18ea26dc-78f1-479e-9e7c-722632f9304d-combined-ca-bundle\") pod \"placement-d77548fc6-2zqkd\" (UID: \"18ea26dc-78f1-479e-9e7c-722632f9304d\") " pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.886500 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18ea26dc-78f1-479e-9e7c-722632f9304d-scripts\") pod \"placement-d77548fc6-2zqkd\" (UID: 
\"18ea26dc-78f1-479e-9e7c-722632f9304d\") " pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.887397 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18ea26dc-78f1-479e-9e7c-722632f9304d-public-tls-certs\") pod \"placement-d77548fc6-2zqkd\" (UID: \"18ea26dc-78f1-479e-9e7c-722632f9304d\") " pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.889279 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18ea26dc-78f1-479e-9e7c-722632f9304d-config-data\") pod \"placement-d77548fc6-2zqkd\" (UID: \"18ea26dc-78f1-479e-9e7c-722632f9304d\") " pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.907720 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p775q\" (UniqueName: \"kubernetes.io/projected/18ea26dc-78f1-479e-9e7c-722632f9304d-kube-api-access-p775q\") pod \"placement-d77548fc6-2zqkd\" (UID: \"18ea26dc-78f1-479e-9e7c-722632f9304d\") " pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:35 crc kubenswrapper[4747]: I1215 05:52:35.972085 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:36 crc kubenswrapper[4747]: I1215 05:52:36.566743 4747 generic.go:334] "Generic (PLEG): container finished" podID="7acaca59-7888-4a05-8eb3-f925f2f8d44b" containerID="d75b321f864f0dbe87288c0553a53f18aefaae1564116ae1192de1a875b1715b" exitCode=0 Dec 15 05:52:36 crc kubenswrapper[4747]: I1215 05:52:36.566898 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zsd2g" event={"ID":"7acaca59-7888-4a05-8eb3-f925f2f8d44b","Type":"ContainerDied","Data":"d75b321f864f0dbe87288c0553a53f18aefaae1564116ae1192de1a875b1715b"} Dec 15 05:52:36 crc kubenswrapper[4747]: I1215 05:52:36.571311 4747 generic.go:334] "Generic (PLEG): container finished" podID="7aa44578-a974-4c1f-90db-014ecf544678" containerID="b4ca73ebb81e4e12a5d93ef322091d87b921b8f98024f11ff81cbd6aca39bf5c" exitCode=0 Dec 15 05:52:36 crc kubenswrapper[4747]: I1215 05:52:36.572465 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tm7d" event={"ID":"7aa44578-a974-4c1f-90db-014ecf544678","Type":"ContainerDied","Data":"b4ca73ebb81e4e12a5d93ef322091d87b921b8f98024f11ff81cbd6aca39bf5c"} Dec 15 05:52:37 crc kubenswrapper[4747]: I1215 05:52:37.581853 4747 generic.go:334] "Generic (PLEG): container finished" podID="a4170a18-3a02-40ea-ab35-838243909dc0" containerID="30090006726f4f8f2e101e709cace0d7149cb9395ae58234657fe2d8507ecf04" exitCode=0 Dec 15 05:52:37 crc kubenswrapper[4747]: I1215 05:52:37.582072 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d5wlm" event={"ID":"a4170a18-3a02-40ea-ab35-838243909dc0","Type":"ContainerDied","Data":"30090006726f4f8f2e101e709cace0d7149cb9395ae58234657fe2d8507ecf04"} Dec 15 05:52:38 crc kubenswrapper[4747]: I1215 05:52:38.305588 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-84m7w" Dec 15 05:52:38 crc 
kubenswrapper[4747]: I1215 05:52:38.306439 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-84m7w" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.358537 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-84m7w" podUID="e8515fd4-ce21-4f89-a703-e3807fa6fd90" containerName="registry-server" probeResult="failure" output=< Dec 15 05:52:39 crc kubenswrapper[4747]: timeout: failed to connect service ":50051" within 1s Dec 15 05:52:39 crc kubenswrapper[4747]: > Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.617446 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zsd2g" event={"ID":"7acaca59-7888-4a05-8eb3-f925f2f8d44b","Type":"ContainerDied","Data":"17b01294a346719f951e85c8bb608f04c17d6003bbd9bc4b14bd0bf8056e5606"} Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.617785 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17b01294a346719f951e85c8bb608f04c17d6003bbd9bc4b14bd0bf8056e5606" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.621525 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d5wlm" event={"ID":"a4170a18-3a02-40ea-ab35-838243909dc0","Type":"ContainerDied","Data":"e396d352c98b8afb853b5cb449d0abd5e2dc59221c1af135adfdd6ae512f0108"} Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.621603 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e396d352c98b8afb853b5cb449d0abd5e2dc59221c1af135adfdd6ae512f0108" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.729168 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.730771 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-zsd2g" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.881034 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-fernet-keys\") pod \"a4170a18-3a02-40ea-ab35-838243909dc0\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.881822 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7acaca59-7888-4a05-8eb3-f925f2f8d44b-db-sync-config-data\") pod \"7acaca59-7888-4a05-8eb3-f925f2f8d44b\" (UID: \"7acaca59-7888-4a05-8eb3-f925f2f8d44b\") " Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.881869 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acaca59-7888-4a05-8eb3-f925f2f8d44b-combined-ca-bundle\") pod \"7acaca59-7888-4a05-8eb3-f925f2f8d44b\" (UID: \"7acaca59-7888-4a05-8eb3-f925f2f8d44b\") " Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.881960 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf9tq\" (UniqueName: \"kubernetes.io/projected/7acaca59-7888-4a05-8eb3-f925f2f8d44b-kube-api-access-nf9tq\") pod \"7acaca59-7888-4a05-8eb3-f925f2f8d44b\" (UID: \"7acaca59-7888-4a05-8eb3-f925f2f8d44b\") " Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.881992 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-scripts\") pod \"a4170a18-3a02-40ea-ab35-838243909dc0\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.882018 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-l6dlq\" (UniqueName: \"kubernetes.io/projected/a4170a18-3a02-40ea-ab35-838243909dc0-kube-api-access-l6dlq\") pod \"a4170a18-3a02-40ea-ab35-838243909dc0\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.882052 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-config-data\") pod \"a4170a18-3a02-40ea-ab35-838243909dc0\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.882086 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-credential-keys\") pod \"a4170a18-3a02-40ea-ab35-838243909dc0\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.882128 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-combined-ca-bundle\") pod \"a4170a18-3a02-40ea-ab35-838243909dc0\" (UID: \"a4170a18-3a02-40ea-ab35-838243909dc0\") " Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.886496 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a4170a18-3a02-40ea-ab35-838243909dc0" (UID: "a4170a18-3a02-40ea-ab35-838243909dc0"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.890062 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4170a18-3a02-40ea-ab35-838243909dc0-kube-api-access-l6dlq" (OuterVolumeSpecName: "kube-api-access-l6dlq") pod "a4170a18-3a02-40ea-ab35-838243909dc0" (UID: "a4170a18-3a02-40ea-ab35-838243909dc0"). InnerVolumeSpecName "kube-api-access-l6dlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.890065 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7acaca59-7888-4a05-8eb3-f925f2f8d44b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7acaca59-7888-4a05-8eb3-f925f2f8d44b" (UID: "7acaca59-7888-4a05-8eb3-f925f2f8d44b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.891626 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a4170a18-3a02-40ea-ab35-838243909dc0" (UID: "a4170a18-3a02-40ea-ab35-838243909dc0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.892546 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7acaca59-7888-4a05-8eb3-f925f2f8d44b-kube-api-access-nf9tq" (OuterVolumeSpecName: "kube-api-access-nf9tq") pod "7acaca59-7888-4a05-8eb3-f925f2f8d44b" (UID: "7acaca59-7888-4a05-8eb3-f925f2f8d44b"). InnerVolumeSpecName "kube-api-access-nf9tq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.893123 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-scripts" (OuterVolumeSpecName: "scripts") pod "a4170a18-3a02-40ea-ab35-838243909dc0" (UID: "a4170a18-3a02-40ea-ab35-838243909dc0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.917261 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-config-data" (OuterVolumeSpecName: "config-data") pod "a4170a18-3a02-40ea-ab35-838243909dc0" (UID: "a4170a18-3a02-40ea-ab35-838243909dc0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.919810 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7acaca59-7888-4a05-8eb3-f925f2f8d44b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7acaca59-7888-4a05-8eb3-f925f2f8d44b" (UID: "7acaca59-7888-4a05-8eb3-f925f2f8d44b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.922216 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4170a18-3a02-40ea-ab35-838243909dc0" (UID: "a4170a18-3a02-40ea-ab35-838243909dc0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.983678 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acaca59-7888-4a05-8eb3-f925f2f8d44b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.983896 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf9tq\" (UniqueName: \"kubernetes.io/projected/7acaca59-7888-4a05-8eb3-f925f2f8d44b-kube-api-access-nf9tq\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.983909 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.983918 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6dlq\" (UniqueName: \"kubernetes.io/projected/a4170a18-3a02-40ea-ab35-838243909dc0-kube-api-access-l6dlq\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.983940 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.983950 4747 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.983958 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 
05:52:39.983965 4747 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4170a18-3a02-40ea-ab35-838243909dc0-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:39 crc kubenswrapper[4747]: I1215 05:52:39.983973 4747 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7acaca59-7888-4a05-8eb3-f925f2f8d44b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.157318 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d77548fc6-2zqkd"] Dec 15 05:52:40 crc kubenswrapper[4747]: W1215 05:52:40.160464 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18ea26dc_78f1_479e_9e7c_722632f9304d.slice/crio-f1451f1cf3f183fdadbb884a208332254a2cae5c89c90c5b8631ab8af857214a WatchSource:0}: Error finding container f1451f1cf3f183fdadbb884a208332254a2cae5c89c90c5b8631ab8af857214a: Status 404 returned error can't find the container with id f1451f1cf3f183fdadbb884a208332254a2cae5c89c90c5b8631ab8af857214a Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.642128 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"873188a0-9dbb-4c95-b39e-cd503e07e59f","Type":"ContainerStarted","Data":"f4e96925bcfa06c1f0423d51300c6d70ba8b0eaceb34083e2095061f09f56e24"} Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.642408 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tm7d" event={"ID":"7aa44578-a974-4c1f-90db-014ecf544678","Type":"ContainerStarted","Data":"85d812e3d1f7bc97ecfe41c52a65a3a2b62bfb65ac6b2bd6e8322b67c7dac138"} Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.650742 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-d5wlm" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.650771 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d77548fc6-2zqkd" event={"ID":"18ea26dc-78f1-479e-9e7c-722632f9304d","Type":"ContainerStarted","Data":"337766628d3ceee9015cc8a1b9c8f8ab04a40333cb19aba007a1d10ee4f39eb4"} Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.650814 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d77548fc6-2zqkd" event={"ID":"18ea26dc-78f1-479e-9e7c-722632f9304d","Type":"ContainerStarted","Data":"dca7f8dbfbdbc3661e123a2e0e86074abbe73e85b574bfeb4f0cb4b34fef1308"} Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.650824 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d77548fc6-2zqkd" event={"ID":"18ea26dc-78f1-479e-9e7c-722632f9304d","Type":"ContainerStarted","Data":"f1451f1cf3f183fdadbb884a208332254a2cae5c89c90c5b8631ab8af857214a"} Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.651918 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-zsd2g" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.652063 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.652106 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.670134 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5tm7d" podStartSLOduration=17.825308163 podStartE2EDuration="24.670124267s" podCreationTimestamp="2025-12-15 05:52:16 +0000 UTC" firstStartedPulling="2025-12-15 05:52:32.815355319 +0000 UTC m=+916.511867236" lastFinishedPulling="2025-12-15 05:52:39.660171423 +0000 UTC m=+923.356683340" observedRunningTime="2025-12-15 05:52:40.660072863 +0000 UTC m=+924.356584780" watchObservedRunningTime="2025-12-15 05:52:40.670124267 +0000 UTC m=+924.366636184" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.769310 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-d77548fc6-2zqkd" podStartSLOduration=5.769287926 podStartE2EDuration="5.769287926s" podCreationTimestamp="2025-12-15 05:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:52:40.680901905 +0000 UTC m=+924.377413822" watchObservedRunningTime="2025-12-15 05:52:40.769287926 +0000 UTC m=+924.465799843" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.822505 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6b5fccc9fc-25v6s"] Dec 15 05:52:40 crc kubenswrapper[4747]: E1215 05:52:40.823381 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7acaca59-7888-4a05-8eb3-f925f2f8d44b" containerName="barbican-db-sync" Dec 15 05:52:40 crc 
kubenswrapper[4747]: I1215 05:52:40.823477 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="7acaca59-7888-4a05-8eb3-f925f2f8d44b" containerName="barbican-db-sync" Dec 15 05:52:40 crc kubenswrapper[4747]: E1215 05:52:40.823553 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4170a18-3a02-40ea-ab35-838243909dc0" containerName="keystone-bootstrap" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.823622 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4170a18-3a02-40ea-ab35-838243909dc0" containerName="keystone-bootstrap" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.823880 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="7acaca59-7888-4a05-8eb3-f925f2f8d44b" containerName="barbican-db-sync" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.824030 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4170a18-3a02-40ea-ab35-838243909dc0" containerName="keystone-bootstrap" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.824908 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.826966 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.829481 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.829585 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x9mhm" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.829877 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.830090 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.830190 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.843760 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6b5fccc9fc-25v6s"] Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.897075 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.897283 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.908346 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f0cf723-d247-4d37-95f2-2ba1318f3e27-internal-tls-certs\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " 
pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.908422 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f0cf723-d247-4d37-95f2-2ba1318f3e27-scripts\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.908468 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkmwq\" (UniqueName: \"kubernetes.io/projected/3f0cf723-d247-4d37-95f2-2ba1318f3e27-kube-api-access-kkmwq\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.908544 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f0cf723-d247-4d37-95f2-2ba1318f3e27-combined-ca-bundle\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.908581 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3f0cf723-d247-4d37-95f2-2ba1318f3e27-credential-keys\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.908806 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f0cf723-d247-4d37-95f2-2ba1318f3e27-fernet-keys\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") 
" pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.908909 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f0cf723-d247-4d37-95f2-2ba1318f3e27-public-tls-certs\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.909052 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f0cf723-d247-4d37-95f2-2ba1318f3e27-config-data\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.925167 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.925332 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.932185 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.933193 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.950321 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 15 05:52:40 crc kubenswrapper[4747]: I1215 05:52:40.957983 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 
05:52:41.011132 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f0cf723-d247-4d37-95f2-2ba1318f3e27-internal-tls-certs\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.011201 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f0cf723-d247-4d37-95f2-2ba1318f3e27-scripts\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.011242 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkmwq\" (UniqueName: \"kubernetes.io/projected/3f0cf723-d247-4d37-95f2-2ba1318f3e27-kube-api-access-kkmwq\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.011321 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f0cf723-d247-4d37-95f2-2ba1318f3e27-combined-ca-bundle\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.011346 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3f0cf723-d247-4d37-95f2-2ba1318f3e27-credential-keys\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.011441 4747 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f0cf723-d247-4d37-95f2-2ba1318f3e27-fernet-keys\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.011469 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f0cf723-d247-4d37-95f2-2ba1318f3e27-public-tls-certs\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.011491 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f0cf723-d247-4d37-95f2-2ba1318f3e27-config-data\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.028883 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f0cf723-d247-4d37-95f2-2ba1318f3e27-internal-tls-certs\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.029211 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f0cf723-d247-4d37-95f2-2ba1318f3e27-scripts\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.029474 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f0cf723-d247-4d37-95f2-2ba1318f3e27-fernet-keys\") pod 
\"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.029653 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f0cf723-d247-4d37-95f2-2ba1318f3e27-public-tls-certs\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.030024 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3f0cf723-d247-4d37-95f2-2ba1318f3e27-credential-keys\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.031669 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f0cf723-d247-4d37-95f2-2ba1318f3e27-combined-ca-bundle\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.032839 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f0cf723-d247-4d37-95f2-2ba1318f3e27-config-data\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.056087 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkmwq\" (UniqueName: \"kubernetes.io/projected/3f0cf723-d247-4d37-95f2-2ba1318f3e27-kube-api-access-kkmwq\") pod \"keystone-6b5fccc9fc-25v6s\" (UID: \"3f0cf723-d247-4d37-95f2-2ba1318f3e27\") " 
pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.059766 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-fb9798db-mhqvv"] Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.061741 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-fb9798db-mhqvv" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.065746 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-v6p95" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.066000 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.066152 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.070801 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-9b996c647-vsbr7"] Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.077465 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-9b996c647-vsbr7" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.081558 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.081746 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-fb9798db-mhqvv"] Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.091557 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9b996c647-vsbr7"] Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.113254 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjzlr\" (UniqueName: \"kubernetes.io/projected/5adecd4c-fd5a-4186-866f-2de0e4f9a859-kube-api-access-tjzlr\") pod \"barbican-keystone-listener-fb9798db-mhqvv\" (UID: \"5adecd4c-fd5a-4186-866f-2de0e4f9a859\") " pod="openstack/barbican-keystone-listener-fb9798db-mhqvv" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.113298 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5adecd4c-fd5a-4186-866f-2de0e4f9a859-config-data-custom\") pod \"barbican-keystone-listener-fb9798db-mhqvv\" (UID: \"5adecd4c-fd5a-4186-866f-2de0e4f9a859\") " pod="openstack/barbican-keystone-listener-fb9798db-mhqvv" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.113375 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5adecd4c-fd5a-4186-866f-2de0e4f9a859-config-data\") pod \"barbican-keystone-listener-fb9798db-mhqvv\" (UID: \"5adecd4c-fd5a-4186-866f-2de0e4f9a859\") " pod="openstack/barbican-keystone-listener-fb9798db-mhqvv" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.113403 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5adecd4c-fd5a-4186-866f-2de0e4f9a859-logs\") pod \"barbican-keystone-listener-fb9798db-mhqvv\" (UID: \"5adecd4c-fd5a-4186-866f-2de0e4f9a859\") " pod="openstack/barbican-keystone-listener-fb9798db-mhqvv" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.113440 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adecd4c-fd5a-4186-866f-2de0e4f9a859-combined-ca-bundle\") pod \"barbican-keystone-listener-fb9798db-mhqvv\" (UID: \"5adecd4c-fd5a-4186-866f-2de0e4f9a859\") " pod="openstack/barbican-keystone-listener-fb9798db-mhqvv" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.145053 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.158437 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5798cc97cf-fkkrr"] Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.158959 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" podUID="e9d347b7-7c56-4a22-931e-88552ac24159" containerName="dnsmasq-dns" containerID="cri-o://cfbdfe656ed85c86b90132e82fe57ed5a8073d9deafbf03db04c7e3403f95c59" gracePeriod=10 Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.161605 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.215835 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-796db5f74c-84jzt"] Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.217579 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.226857 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc262319-2445-42a7-9fb4-46f640216e00-combined-ca-bundle\") pod \"barbican-worker-9b996c647-vsbr7\" (UID: \"fc262319-2445-42a7-9fb4-46f640216e00\") " pod="openstack/barbican-worker-9b996c647-vsbr7" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.226991 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc262319-2445-42a7-9fb4-46f640216e00-config-data\") pod \"barbican-worker-9b996c647-vsbr7\" (UID: \"fc262319-2445-42a7-9fb4-46f640216e00\") " pod="openstack/barbican-worker-9b996c647-vsbr7" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.227134 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjzlr\" (UniqueName: \"kubernetes.io/projected/5adecd4c-fd5a-4186-866f-2de0e4f9a859-kube-api-access-tjzlr\") pod \"barbican-keystone-listener-fb9798db-mhqvv\" (UID: \"5adecd4c-fd5a-4186-866f-2de0e4f9a859\") " pod="openstack/barbican-keystone-listener-fb9798db-mhqvv" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.227185 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5adecd4c-fd5a-4186-866f-2de0e4f9a859-config-data-custom\") pod \"barbican-keystone-listener-fb9798db-mhqvv\" (UID: \"5adecd4c-fd5a-4186-866f-2de0e4f9a859\") " pod="openstack/barbican-keystone-listener-fb9798db-mhqvv" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.227365 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5adecd4c-fd5a-4186-866f-2de0e4f9a859-config-data\") pod 
\"barbican-keystone-listener-fb9798db-mhqvv\" (UID: \"5adecd4c-fd5a-4186-866f-2de0e4f9a859\") " pod="openstack/barbican-keystone-listener-fb9798db-mhqvv" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.227386 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc262319-2445-42a7-9fb4-46f640216e00-config-data-custom\") pod \"barbican-worker-9b996c647-vsbr7\" (UID: \"fc262319-2445-42a7-9fb4-46f640216e00\") " pod="openstack/barbican-worker-9b996c647-vsbr7" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.227411 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5adecd4c-fd5a-4186-866f-2de0e4f9a859-logs\") pod \"barbican-keystone-listener-fb9798db-mhqvv\" (UID: \"5adecd4c-fd5a-4186-866f-2de0e4f9a859\") " pod="openstack/barbican-keystone-listener-fb9798db-mhqvv" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.227477 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adecd4c-fd5a-4186-866f-2de0e4f9a859-combined-ca-bundle\") pod \"barbican-keystone-listener-fb9798db-mhqvv\" (UID: \"5adecd4c-fd5a-4186-866f-2de0e4f9a859\") " pod="openstack/barbican-keystone-listener-fb9798db-mhqvv" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.227517 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc262319-2445-42a7-9fb4-46f640216e00-logs\") pod \"barbican-worker-9b996c647-vsbr7\" (UID: \"fc262319-2445-42a7-9fb4-46f640216e00\") " pod="openstack/barbican-worker-9b996c647-vsbr7" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.227562 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrppz\" (UniqueName: 
\"kubernetes.io/projected/fc262319-2445-42a7-9fb4-46f640216e00-kube-api-access-vrppz\") pod \"barbican-worker-9b996c647-vsbr7\" (UID: \"fc262319-2445-42a7-9fb4-46f640216e00\") " pod="openstack/barbican-worker-9b996c647-vsbr7" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.228648 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5adecd4c-fd5a-4186-866f-2de0e4f9a859-logs\") pod \"barbican-keystone-listener-fb9798db-mhqvv\" (UID: \"5adecd4c-fd5a-4186-866f-2de0e4f9a859\") " pod="openstack/barbican-keystone-listener-fb9798db-mhqvv" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.234748 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adecd4c-fd5a-4186-866f-2de0e4f9a859-combined-ca-bundle\") pod \"barbican-keystone-listener-fb9798db-mhqvv\" (UID: \"5adecd4c-fd5a-4186-866f-2de0e4f9a859\") " pod="openstack/barbican-keystone-listener-fb9798db-mhqvv" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.237003 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5adecd4c-fd5a-4186-866f-2de0e4f9a859-config-data-custom\") pod \"barbican-keystone-listener-fb9798db-mhqvv\" (UID: \"5adecd4c-fd5a-4186-866f-2de0e4f9a859\") " pod="openstack/barbican-keystone-listener-fb9798db-mhqvv" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.250418 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5adecd4c-fd5a-4186-866f-2de0e4f9a859-config-data\") pod \"barbican-keystone-listener-fb9798db-mhqvv\" (UID: \"5adecd4c-fd5a-4186-866f-2de0e4f9a859\") " pod="openstack/barbican-keystone-listener-fb9798db-mhqvv" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.252031 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjzlr\" 
(UniqueName: \"kubernetes.io/projected/5adecd4c-fd5a-4186-866f-2de0e4f9a859-kube-api-access-tjzlr\") pod \"barbican-keystone-listener-fb9798db-mhqvv\" (UID: \"5adecd4c-fd5a-4186-866f-2de0e4f9a859\") " pod="openstack/barbican-keystone-listener-fb9798db-mhqvv" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.253365 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-796db5f74c-84jzt"] Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.267411 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6f4bb4686b-tlv2x"] Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.273844 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.276872 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f4bb4686b-tlv2x"] Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.277240 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.331270 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-dns-swift-storage-0\") pod \"dnsmasq-dns-796db5f74c-84jzt\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.331321 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-config\") pod \"dnsmasq-dns-796db5f74c-84jzt\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.331338 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-dns-svc\") pod \"dnsmasq-dns-796db5f74c-84jzt\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.331387 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-ovsdbserver-sb\") pod \"dnsmasq-dns-796db5f74c-84jzt\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.331416 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc262319-2445-42a7-9fb4-46f640216e00-config-data-custom\") pod \"barbican-worker-9b996c647-vsbr7\" (UID: \"fc262319-2445-42a7-9fb4-46f640216e00\") " pod="openstack/barbican-worker-9b996c647-vsbr7" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.331453 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt8tn\" (UniqueName: \"kubernetes.io/projected/9e10c5cb-1a07-4941-885e-45dc50d15021-kube-api-access-xt8tn\") pod \"dnsmasq-dns-796db5f74c-84jzt\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.331473 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc262319-2445-42a7-9fb4-46f640216e00-logs\") pod \"barbican-worker-9b996c647-vsbr7\" (UID: \"fc262319-2445-42a7-9fb4-46f640216e00\") " pod="openstack/barbican-worker-9b996c647-vsbr7" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.331499 4747 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrppz\" (UniqueName: \"kubernetes.io/projected/fc262319-2445-42a7-9fb4-46f640216e00-kube-api-access-vrppz\") pod \"barbican-worker-9b996c647-vsbr7\" (UID: \"fc262319-2445-42a7-9fb4-46f640216e00\") " pod="openstack/barbican-worker-9b996c647-vsbr7" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.331526 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-ovsdbserver-nb\") pod \"dnsmasq-dns-796db5f74c-84jzt\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.331546 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc262319-2445-42a7-9fb4-46f640216e00-combined-ca-bundle\") pod \"barbican-worker-9b996c647-vsbr7\" (UID: \"fc262319-2445-42a7-9fb4-46f640216e00\") " pod="openstack/barbican-worker-9b996c647-vsbr7" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.331582 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc262319-2445-42a7-9fb4-46f640216e00-config-data\") pod \"barbican-worker-9b996c647-vsbr7\" (UID: \"fc262319-2445-42a7-9fb4-46f640216e00\") " pod="openstack/barbican-worker-9b996c647-vsbr7" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.333344 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc262319-2445-42a7-9fb4-46f640216e00-logs\") pod \"barbican-worker-9b996c647-vsbr7\" (UID: \"fc262319-2445-42a7-9fb4-46f640216e00\") " pod="openstack/barbican-worker-9b996c647-vsbr7" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.335880 4747 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc262319-2445-42a7-9fb4-46f640216e00-config-data\") pod \"barbican-worker-9b996c647-vsbr7\" (UID: \"fc262319-2445-42a7-9fb4-46f640216e00\") " pod="openstack/barbican-worker-9b996c647-vsbr7" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.341536 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc262319-2445-42a7-9fb4-46f640216e00-config-data-custom\") pod \"barbican-worker-9b996c647-vsbr7\" (UID: \"fc262319-2445-42a7-9fb4-46f640216e00\") " pod="openstack/barbican-worker-9b996c647-vsbr7" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.342745 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc262319-2445-42a7-9fb4-46f640216e00-combined-ca-bundle\") pod \"barbican-worker-9b996c647-vsbr7\" (UID: \"fc262319-2445-42a7-9fb4-46f640216e00\") " pod="openstack/barbican-worker-9b996c647-vsbr7" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.365373 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrppz\" (UniqueName: \"kubernetes.io/projected/fc262319-2445-42a7-9fb4-46f640216e00-kube-api-access-vrppz\") pod \"barbican-worker-9b996c647-vsbr7\" (UID: \"fc262319-2445-42a7-9fb4-46f640216e00\") " pod="openstack/barbican-worker-9b996c647-vsbr7" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.433185 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed94a5c7-82af-48dc-8592-440d39a321f7-combined-ca-bundle\") pod \"barbican-api-6f4bb4686b-tlv2x\" (UID: \"ed94a5c7-82af-48dc-8592-440d39a321f7\") " pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.433240 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-dns-swift-storage-0\") pod \"dnsmasq-dns-796db5f74c-84jzt\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.433279 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-config\") pod \"dnsmasq-dns-796db5f74c-84jzt\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.433297 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-dns-svc\") pod \"dnsmasq-dns-796db5f74c-84jzt\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.433321 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jv2t\" (UniqueName: \"kubernetes.io/projected/ed94a5c7-82af-48dc-8592-440d39a321f7-kube-api-access-5jv2t\") pod \"barbican-api-6f4bb4686b-tlv2x\" (UID: \"ed94a5c7-82af-48dc-8592-440d39a321f7\") " pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.433348 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed94a5c7-82af-48dc-8592-440d39a321f7-config-data-custom\") pod \"barbican-api-6f4bb4686b-tlv2x\" (UID: \"ed94a5c7-82af-48dc-8592-440d39a321f7\") " pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.433393 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-ovsdbserver-sb\") pod \"dnsmasq-dns-796db5f74c-84jzt\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.433428 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed94a5c7-82af-48dc-8592-440d39a321f7-config-data\") pod \"barbican-api-6f4bb4686b-tlv2x\" (UID: \"ed94a5c7-82af-48dc-8592-440d39a321f7\") " pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.433460 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt8tn\" (UniqueName: \"kubernetes.io/projected/9e10c5cb-1a07-4941-885e-45dc50d15021-kube-api-access-xt8tn\") pod \"dnsmasq-dns-796db5f74c-84jzt\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.433478 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed94a5c7-82af-48dc-8592-440d39a321f7-logs\") pod \"barbican-api-6f4bb4686b-tlv2x\" (UID: \"ed94a5c7-82af-48dc-8592-440d39a321f7\") " pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.433517 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-ovsdbserver-nb\") pod \"dnsmasq-dns-796db5f74c-84jzt\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.434362 4747 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-ovsdbserver-nb\") pod \"dnsmasq-dns-796db5f74c-84jzt\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.434846 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-dns-swift-storage-0\") pod \"dnsmasq-dns-796db5f74c-84jzt\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.435362 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-config\") pod \"dnsmasq-dns-796db5f74c-84jzt\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.437500 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-ovsdbserver-sb\") pod \"dnsmasq-dns-796db5f74c-84jzt\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.441385 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-fb9798db-mhqvv" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.447352 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-dns-svc\") pod \"dnsmasq-dns-796db5f74c-84jzt\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.458168 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-9b996c647-vsbr7" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.470519 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt8tn\" (UniqueName: \"kubernetes.io/projected/9e10c5cb-1a07-4941-885e-45dc50d15021-kube-api-access-xt8tn\") pod \"dnsmasq-dns-796db5f74c-84jzt\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.535736 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed94a5c7-82af-48dc-8592-440d39a321f7-combined-ca-bundle\") pod \"barbican-api-6f4bb4686b-tlv2x\" (UID: \"ed94a5c7-82af-48dc-8592-440d39a321f7\") " pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.535787 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jv2t\" (UniqueName: \"kubernetes.io/projected/ed94a5c7-82af-48dc-8592-440d39a321f7-kube-api-access-5jv2t\") pod \"barbican-api-6f4bb4686b-tlv2x\" (UID: \"ed94a5c7-82af-48dc-8592-440d39a321f7\") " pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.535815 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/ed94a5c7-82af-48dc-8592-440d39a321f7-config-data-custom\") pod \"barbican-api-6f4bb4686b-tlv2x\" (UID: \"ed94a5c7-82af-48dc-8592-440d39a321f7\") " pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.535862 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed94a5c7-82af-48dc-8592-440d39a321f7-config-data\") pod \"barbican-api-6f4bb4686b-tlv2x\" (UID: \"ed94a5c7-82af-48dc-8592-440d39a321f7\") " pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.535889 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed94a5c7-82af-48dc-8592-440d39a321f7-logs\") pod \"barbican-api-6f4bb4686b-tlv2x\" (UID: \"ed94a5c7-82af-48dc-8592-440d39a321f7\") " pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.536278 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed94a5c7-82af-48dc-8592-440d39a321f7-logs\") pod \"barbican-api-6f4bb4686b-tlv2x\" (UID: \"ed94a5c7-82af-48dc-8592-440d39a321f7\") " pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.542390 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed94a5c7-82af-48dc-8592-440d39a321f7-combined-ca-bundle\") pod \"barbican-api-6f4bb4686b-tlv2x\" (UID: \"ed94a5c7-82af-48dc-8592-440d39a321f7\") " pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.571509 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed94a5c7-82af-48dc-8592-440d39a321f7-config-data-custom\") pod 
\"barbican-api-6f4bb4686b-tlv2x\" (UID: \"ed94a5c7-82af-48dc-8592-440d39a321f7\") " pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.572469 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed94a5c7-82af-48dc-8592-440d39a321f7-config-data\") pod \"barbican-api-6f4bb4686b-tlv2x\" (UID: \"ed94a5c7-82af-48dc-8592-440d39a321f7\") " pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.582437 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jv2t\" (UniqueName: \"kubernetes.io/projected/ed94a5c7-82af-48dc-8592-440d39a321f7-kube-api-access-5jv2t\") pod \"barbican-api-6f4bb4686b-tlv2x\" (UID: \"ed94a5c7-82af-48dc-8592-440d39a321f7\") " pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.614452 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.625730 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.680335 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-25lpq" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.680381 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-25lpq" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.690691 4747 generic.go:334] "Generic (PLEG): container finished" podID="e9d347b7-7c56-4a22-931e-88552ac24159" containerID="cfbdfe656ed85c86b90132e82fe57ed5a8073d9deafbf03db04c7e3403f95c59" exitCode=0 Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.691855 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" event={"ID":"e9d347b7-7c56-4a22-931e-88552ac24159","Type":"ContainerDied","Data":"cfbdfe656ed85c86b90132e82fe57ed5a8073d9deafbf03db04c7e3403f95c59"} Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.692872 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.693000 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.693014 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.693025 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.749855 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-25lpq" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.873282 4747 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.950351 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6b5fccc9fc-25v6s"] Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.953318 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-ovsdbserver-nb\") pod \"e9d347b7-7c56-4a22-931e-88552ac24159\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.954416 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-config\") pod \"e9d347b7-7c56-4a22-931e-88552ac24159\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.954697 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-dns-swift-storage-0\") pod \"e9d347b7-7c56-4a22-931e-88552ac24159\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.954814 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-ovsdbserver-sb\") pod \"e9d347b7-7c56-4a22-931e-88552ac24159\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.955347 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4wtc\" (UniqueName: \"kubernetes.io/projected/e9d347b7-7c56-4a22-931e-88552ac24159-kube-api-access-w4wtc\") pod 
\"e9d347b7-7c56-4a22-931e-88552ac24159\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.955476 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-dns-svc\") pod \"e9d347b7-7c56-4a22-931e-88552ac24159\" (UID: \"e9d347b7-7c56-4a22-931e-88552ac24159\") " Dec 15 05:52:41 crc kubenswrapper[4747]: I1215 05:52:41.990629 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9d347b7-7c56-4a22-931e-88552ac24159-kube-api-access-w4wtc" (OuterVolumeSpecName: "kube-api-access-w4wtc") pod "e9d347b7-7c56-4a22-931e-88552ac24159" (UID: "e9d347b7-7c56-4a22-931e-88552ac24159"). InnerVolumeSpecName "kube-api-access-w4wtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.058788 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-config" (OuterVolumeSpecName: "config") pod "e9d347b7-7c56-4a22-931e-88552ac24159" (UID: "e9d347b7-7c56-4a22-931e-88552ac24159"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.059194 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4wtc\" (UniqueName: \"kubernetes.io/projected/e9d347b7-7c56-4a22-931e-88552ac24159-kube-api-access-w4wtc\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.070512 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e9d347b7-7c56-4a22-931e-88552ac24159" (UID: "e9d347b7-7c56-4a22-931e-88552ac24159"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.086729 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e9d347b7-7c56-4a22-931e-88552ac24159" (UID: "e9d347b7-7c56-4a22-931e-88552ac24159"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.100665 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e9d347b7-7c56-4a22-931e-88552ac24159" (UID: "e9d347b7-7c56-4a22-931e-88552ac24159"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.105408 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-fb9798db-mhqvv"] Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.107480 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e9d347b7-7c56-4a22-931e-88552ac24159" (UID: "e9d347b7-7c56-4a22-931e-88552ac24159"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.162013 4747 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.162044 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.162065 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.162075 4747 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.162085 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9d347b7-7c56-4a22-931e-88552ac24159-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.332513 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f4bb4686b-tlv2x"] Dec 15 05:52:42 crc kubenswrapper[4747]: W1215 05:52:42.338014 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded94a5c7_82af_48dc_8592_440d39a321f7.slice/crio-abe50a760da8f4fd81e6be4e90f89083232e0c9f217fb36d18be3293b4168797 WatchSource:0}: Error finding container abe50a760da8f4fd81e6be4e90f89083232e0c9f217fb36d18be3293b4168797: Status 404 returned error can't find the 
container with id abe50a760da8f4fd81e6be4e90f89083232e0c9f217fb36d18be3293b4168797 Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.340080 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9b996c647-vsbr7"] Dec 15 05:52:42 crc kubenswrapper[4747]: W1215 05:52:42.342996 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc262319_2445_42a7_9fb4_46f640216e00.slice/crio-51258cc039a6b021b220ff1eacaed36aec25a9d56cb901314737a98a87bb53af WatchSource:0}: Error finding container 51258cc039a6b021b220ff1eacaed36aec25a9d56cb901314737a98a87bb53af: Status 404 returned error can't find the container with id 51258cc039a6b021b220ff1eacaed36aec25a9d56cb901314737a98a87bb53af Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.441976 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-796db5f74c-84jzt"] Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.710725 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-796db5f74c-84jzt" event={"ID":"9e10c5cb-1a07-4941-885e-45dc50d15021","Type":"ContainerStarted","Data":"dc86bbb85d1a5e9664dbe165e6819c53e31c9f194068d15739383ccd03e92fd2"} Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.715723 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f4bb4686b-tlv2x" event={"ID":"ed94a5c7-82af-48dc-8592-440d39a321f7","Type":"ContainerStarted","Data":"e61dac18edbba4af42f513e64558a8fd72646531c05e6d6bd83f7eb16203ba7a"} Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.716059 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f4bb4686b-tlv2x" event={"ID":"ed94a5c7-82af-48dc-8592-440d39a321f7","Type":"ContainerStarted","Data":"abe50a760da8f4fd81e6be4e90f89083232e0c9f217fb36d18be3293b4168797"} Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.719623 4747 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" event={"ID":"e9d347b7-7c56-4a22-931e-88552ac24159","Type":"ContainerDied","Data":"4842b76d8c4a50f7174b6b0672eb5ec7ca96240eeddfce8fee6de50fc6cac59d"} Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.719683 4747 scope.go:117] "RemoveContainer" containerID="cfbdfe656ed85c86b90132e82fe57ed5a8073d9deafbf03db04c7e3403f95c59" Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.719867 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.724174 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6b5fccc9fc-25v6s" event={"ID":"3f0cf723-d247-4d37-95f2-2ba1318f3e27","Type":"ContainerStarted","Data":"f88deaffb00abbf9ad847e1e0d746ca48a9f2b0613737c6a57bbb60583474705"} Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.724224 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6b5fccc9fc-25v6s" event={"ID":"3f0cf723-d247-4d37-95f2-2ba1318f3e27","Type":"ContainerStarted","Data":"f36fa78b5697572310c30ab11b75f614d2ce2fbb069971ba7c50980bf465ba02"} Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.724315 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.729123 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fb9798db-mhqvv" event={"ID":"5adecd4c-fd5a-4186-866f-2de0e4f9a859","Type":"ContainerStarted","Data":"838f7ddebc1a2343d7a04e8386bbd01c649c8c19457a8bd9219e73599d4c6d7e"} Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.738282 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9b996c647-vsbr7" 
event={"ID":"fc262319-2445-42a7-9fb4-46f640216e00","Type":"ContainerStarted","Data":"51258cc039a6b021b220ff1eacaed36aec25a9d56cb901314737a98a87bb53af"} Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.741039 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5798cc97cf-fkkrr"] Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.750608 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5798cc97cf-fkkrr"] Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.755842 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6b5fccc9fc-25v6s" podStartSLOduration=2.755822393 podStartE2EDuration="2.755822393s" podCreationTimestamp="2025-12-15 05:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:52:42.754645059 +0000 UTC m=+926.451156975" watchObservedRunningTime="2025-12-15 05:52:42.755822393 +0000 UTC m=+926.452334309" Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.763028 4747 scope.go:117] "RemoveContainer" containerID="f6a226bb3d7cecb5bb08753332040466d07d68a1efa5b7c8ffdfbc5744144c49" Dec 15 05:52:42 crc kubenswrapper[4747]: I1215 05:52:42.841706 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-25lpq" Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.540472 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.749910 4747 generic.go:334] "Generic (PLEG): container finished" podID="9e10c5cb-1a07-4941-885e-45dc50d15021" containerID="0e5e3841371bb40dd5f2f2c13f223c9fd3f34055f264aaa575a93b32bb4001a5" exitCode=0 Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.750028 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-796db5f74c-84jzt" event={"ID":"9e10c5cb-1a07-4941-885e-45dc50d15021","Type":"ContainerDied","Data":"0e5e3841371bb40dd5f2f2c13f223c9fd3f34055f264aaa575a93b32bb4001a5"} Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.755973 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f4bb4686b-tlv2x" event={"ID":"ed94a5c7-82af-48dc-8592-440d39a321f7","Type":"ContainerStarted","Data":"e96e35defa31d4421e562086962fa2ef685429ede9b83540bcc32b54739cf01e"} Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.756117 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.756197 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.757579 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.757731 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.757743 4747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.776216 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-25lpq"] Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.793389 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6f4bb4686b-tlv2x" podStartSLOduration=2.793370823 podStartE2EDuration="2.793370823s" podCreationTimestamp="2025-12-15 05:52:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:52:43.786887068 +0000 UTC m=+927.483398986" watchObservedRunningTime="2025-12-15 
05:52:43.793370823 +0000 UTC m=+927.489882740" Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.836692 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.934501 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-79fcd98c9d-ccgjm"] Dec 15 05:52:43 crc kubenswrapper[4747]: E1215 05:52:43.934879 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d347b7-7c56-4a22-931e-88552ac24159" containerName="dnsmasq-dns" Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.934902 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d347b7-7c56-4a22-931e-88552ac24159" containerName="dnsmasq-dns" Dec 15 05:52:43 crc kubenswrapper[4747]: E1215 05:52:43.934918 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d347b7-7c56-4a22-931e-88552ac24159" containerName="init" Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.934939 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d347b7-7c56-4a22-931e-88552ac24159" containerName="init" Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.935184 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9d347b7-7c56-4a22-931e-88552ac24159" containerName="dnsmasq-dns" Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.936172 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.943170 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.943351 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.961907 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79fcd98c9d-ccgjm"] Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.996105 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 15 05:52:43 crc kubenswrapper[4747]: I1215 05:52:43.996400 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.009576 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecaeb0c4-ae67-4901-bc77-863b3a8c5c03-config-data\") pod \"barbican-api-79fcd98c9d-ccgjm\" (UID: \"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03\") " pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.009649 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecaeb0c4-ae67-4901-bc77-863b3a8c5c03-internal-tls-certs\") pod \"barbican-api-79fcd98c9d-ccgjm\" (UID: \"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03\") " pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.009677 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ecaeb0c4-ae67-4901-bc77-863b3a8c5c03-public-tls-certs\") pod \"barbican-api-79fcd98c9d-ccgjm\" (UID: \"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03\") " pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.009772 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ecaeb0c4-ae67-4901-bc77-863b3a8c5c03-config-data-custom\") pod \"barbican-api-79fcd98c9d-ccgjm\" (UID: \"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03\") " pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.009801 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecaeb0c4-ae67-4901-bc77-863b3a8c5c03-logs\") pod \"barbican-api-79fcd98c9d-ccgjm\" (UID: \"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03\") " pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.009869 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecaeb0c4-ae67-4901-bc77-863b3a8c5c03-combined-ca-bundle\") pod \"barbican-api-79fcd98c9d-ccgjm\" (UID: \"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03\") " pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.009989 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqmp8\" (UniqueName: \"kubernetes.io/projected/ecaeb0c4-ae67-4901-bc77-863b3a8c5c03-kube-api-access-fqmp8\") pod \"barbican-api-79fcd98c9d-ccgjm\" (UID: \"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03\") " pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.111508 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecaeb0c4-ae67-4901-bc77-863b3a8c5c03-internal-tls-certs\") pod \"barbican-api-79fcd98c9d-ccgjm\" (UID: \"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03\") " pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.111810 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecaeb0c4-ae67-4901-bc77-863b3a8c5c03-public-tls-certs\") pod \"barbican-api-79fcd98c9d-ccgjm\" (UID: \"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03\") " pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.111867 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ecaeb0c4-ae67-4901-bc77-863b3a8c5c03-config-data-custom\") pod \"barbican-api-79fcd98c9d-ccgjm\" (UID: \"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03\") " pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.111891 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecaeb0c4-ae67-4901-bc77-863b3a8c5c03-logs\") pod \"barbican-api-79fcd98c9d-ccgjm\" (UID: \"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03\") " pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.111949 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecaeb0c4-ae67-4901-bc77-863b3a8c5c03-combined-ca-bundle\") pod \"barbican-api-79fcd98c9d-ccgjm\" (UID: \"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03\") " pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.112007 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqmp8\" (UniqueName: 
\"kubernetes.io/projected/ecaeb0c4-ae67-4901-bc77-863b3a8c5c03-kube-api-access-fqmp8\") pod \"barbican-api-79fcd98c9d-ccgjm\" (UID: \"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03\") " pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.112026 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecaeb0c4-ae67-4901-bc77-863b3a8c5c03-config-data\") pod \"barbican-api-79fcd98c9d-ccgjm\" (UID: \"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03\") " pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.113154 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecaeb0c4-ae67-4901-bc77-863b3a8c5c03-logs\") pod \"barbican-api-79fcd98c9d-ccgjm\" (UID: \"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03\") " pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.122912 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecaeb0c4-ae67-4901-bc77-863b3a8c5c03-public-tls-certs\") pod \"barbican-api-79fcd98c9d-ccgjm\" (UID: \"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03\") " pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.123367 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecaeb0c4-ae67-4901-bc77-863b3a8c5c03-internal-tls-certs\") pod \"barbican-api-79fcd98c9d-ccgjm\" (UID: \"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03\") " pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.125152 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecaeb0c4-ae67-4901-bc77-863b3a8c5c03-config-data\") pod 
\"barbican-api-79fcd98c9d-ccgjm\" (UID: \"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03\") " pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.125476 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecaeb0c4-ae67-4901-bc77-863b3a8c5c03-combined-ca-bundle\") pod \"barbican-api-79fcd98c9d-ccgjm\" (UID: \"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03\") " pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.141951 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ecaeb0c4-ae67-4901-bc77-863b3a8c5c03-config-data-custom\") pod \"barbican-api-79fcd98c9d-ccgjm\" (UID: \"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03\") " pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.145563 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqmp8\" (UniqueName: \"kubernetes.io/projected/ecaeb0c4-ae67-4901-bc77-863b3a8c5c03-kube-api-access-fqmp8\") pod \"barbican-api-79fcd98c9d-ccgjm\" (UID: \"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03\") " pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.272253 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.640861 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9d347b7-7c56-4a22-931e-88552ac24159" path="/var/lib/kubelet/pods/e9d347b7-7c56-4a22-931e-88552ac24159/volumes" Dec 15 05:52:44 crc kubenswrapper[4747]: I1215 05:52:44.771915 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-25lpq" podUID="656ef4f9-8e82-43ce-b0f9-b654bcecb12a" containerName="registry-server" containerID="cri-o://722649aebf439cff5f3e65b76a739e8a8d7479c5072467c97f6e8da77ae71b08" gracePeriod=2 Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.117748 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79fcd98c9d-ccgjm"] Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.373213 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25lpq" Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.540898 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/656ef4f9-8e82-43ce-b0f9-b654bcecb12a-catalog-content\") pod \"656ef4f9-8e82-43ce-b0f9-b654bcecb12a\" (UID: \"656ef4f9-8e82-43ce-b0f9-b654bcecb12a\") " Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.541253 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/656ef4f9-8e82-43ce-b0f9-b654bcecb12a-utilities\") pod \"656ef4f9-8e82-43ce-b0f9-b654bcecb12a\" (UID: \"656ef4f9-8e82-43ce-b0f9-b654bcecb12a\") " Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.541496 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74hjh\" (UniqueName: 
\"kubernetes.io/projected/656ef4f9-8e82-43ce-b0f9-b654bcecb12a-kube-api-access-74hjh\") pod \"656ef4f9-8e82-43ce-b0f9-b654bcecb12a\" (UID: \"656ef4f9-8e82-43ce-b0f9-b654bcecb12a\") " Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.542826 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/656ef4f9-8e82-43ce-b0f9-b654bcecb12a-utilities" (OuterVolumeSpecName: "utilities") pod "656ef4f9-8e82-43ce-b0f9-b654bcecb12a" (UID: "656ef4f9-8e82-43ce-b0f9-b654bcecb12a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.550085 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/656ef4f9-8e82-43ce-b0f9-b654bcecb12a-kube-api-access-74hjh" (OuterVolumeSpecName: "kube-api-access-74hjh") pod "656ef4f9-8e82-43ce-b0f9-b654bcecb12a" (UID: "656ef4f9-8e82-43ce-b0f9-b654bcecb12a"). InnerVolumeSpecName "kube-api-access-74hjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.592513 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/656ef4f9-8e82-43ce-b0f9-b654bcecb12a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "656ef4f9-8e82-43ce-b0f9-b654bcecb12a" (UID: "656ef4f9-8e82-43ce-b0f9-b654bcecb12a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.644050 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74hjh\" (UniqueName: \"kubernetes.io/projected/656ef4f9-8e82-43ce-b0f9-b654bcecb12a-kube-api-access-74hjh\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.644082 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/656ef4f9-8e82-43ce-b0f9-b654bcecb12a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.644094 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/656ef4f9-8e82-43ce-b0f9-b654bcecb12a-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.790603 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-796db5f74c-84jzt" event={"ID":"9e10c5cb-1a07-4941-885e-45dc50d15021","Type":"ContainerStarted","Data":"cf00726260f95e81b5468fd976269014fb809fb1ff6c1dc766ef2e281b331c15"} Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.790692 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.797062 4747 generic.go:334] "Generic (PLEG): container finished" podID="656ef4f9-8e82-43ce-b0f9-b654bcecb12a" containerID="722649aebf439cff5f3e65b76a739e8a8d7479c5072467c97f6e8da77ae71b08" exitCode=0 Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.797354 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25lpq" event={"ID":"656ef4f9-8e82-43ce-b0f9-b654bcecb12a","Type":"ContainerDied","Data":"722649aebf439cff5f3e65b76a739e8a8d7479c5072467c97f6e8da77ae71b08"} Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.797445 
4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25lpq" event={"ID":"656ef4f9-8e82-43ce-b0f9-b654bcecb12a","Type":"ContainerDied","Data":"d615cc1eab2d41d6c377647054e79ac3ebc09a0676a36bdb9680715b498ad64f"} Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.797475 4747 scope.go:117] "RemoveContainer" containerID="722649aebf439cff5f3e65b76a739e8a8d7479c5072467c97f6e8da77ae71b08" Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.797369 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25lpq" Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.805055 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79fcd98c9d-ccgjm" event={"ID":"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03","Type":"ContainerStarted","Data":"1a34c83a444c0aff85429466d6dc8df572b1c9451f6f958d16e6f7bfdc761ce8"} Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.805095 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79fcd98c9d-ccgjm" event={"ID":"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03","Type":"ContainerStarted","Data":"62c47df80968eebf1311688ea1a96d623f8fe10451a22647744ed4211b35ccae"} Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.812254 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fb9798db-mhqvv" event={"ID":"5adecd4c-fd5a-4186-866f-2de0e4f9a859","Type":"ContainerStarted","Data":"de3f3ede706737917c1c06af115ea58a362cd9f7b2986422490dab0328de20d0"} Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.812358 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fb9798db-mhqvv" event={"ID":"5adecd4c-fd5a-4186-866f-2de0e4f9a859","Type":"ContainerStarted","Data":"f285984e516d851084633a960f674ea52e07402a19bb033a1813476799496e01"} Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.816757 4747 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9b996c647-vsbr7" event={"ID":"fc262319-2445-42a7-9fb4-46f640216e00","Type":"ContainerStarted","Data":"d1eaabb587b3f72993af4cd454514ee190e3d3a4a3def33d4bb56179f41e4be5"} Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.816847 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9b996c647-vsbr7" event={"ID":"fc262319-2445-42a7-9fb4-46f640216e00","Type":"ContainerStarted","Data":"fcdb50be5670b473b2a102a63677f09badbeb0bca1333c81ca73fe8c2ff80387"} Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.825883 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-796db5f74c-84jzt" podStartSLOduration=4.8258684689999995 podStartE2EDuration="4.825868469s" podCreationTimestamp="2025-12-15 05:52:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:52:45.813457098 +0000 UTC m=+929.509969015" watchObservedRunningTime="2025-12-15 05:52:45.825868469 +0000 UTC m=+929.522380386" Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.832968 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-fb9798db-mhqvv" podStartSLOduration=3.293358988 podStartE2EDuration="5.832955677s" podCreationTimestamp="2025-12-15 05:52:40 +0000 UTC" firstStartedPulling="2025-12-15 05:52:42.127955621 +0000 UTC m=+925.824467538" lastFinishedPulling="2025-12-15 05:52:44.66755231 +0000 UTC m=+928.364064227" observedRunningTime="2025-12-15 05:52:45.828845398 +0000 UTC m=+929.525357314" watchObservedRunningTime="2025-12-15 05:52:45.832955677 +0000 UTC m=+929.529467594" Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.852986 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-9b996c647-vsbr7" podStartSLOduration=2.5195163259999998 
podStartE2EDuration="4.85297113s" podCreationTimestamp="2025-12-15 05:52:41 +0000 UTC" firstStartedPulling="2025-12-15 05:52:42.347623393 +0000 UTC m=+926.044135310" lastFinishedPulling="2025-12-15 05:52:44.681078198 +0000 UTC m=+928.377590114" observedRunningTime="2025-12-15 05:52:45.846917966 +0000 UTC m=+929.543429883" watchObservedRunningTime="2025-12-15 05:52:45.85297113 +0000 UTC m=+929.549483046" Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.874123 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-25lpq"] Dec 15 05:52:45 crc kubenswrapper[4747]: I1215 05:52:45.882345 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-25lpq"] Dec 15 05:52:46 crc kubenswrapper[4747]: I1215 05:52:46.648416 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="656ef4f9-8e82-43ce-b0f9-b654bcecb12a" path="/var/lib/kubelet/pods/656ef4f9-8e82-43ce-b0f9-b654bcecb12a/volumes" Dec 15 05:52:46 crc kubenswrapper[4747]: I1215 05:52:46.734031 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5798cc97cf-fkkrr" podUID="e9d347b7-7c56-4a22-931e-88552ac24159" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout" Dec 15 05:52:47 crc kubenswrapper[4747]: I1215 05:52:47.086953 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5tm7d" Dec 15 05:52:47 crc kubenswrapper[4747]: I1215 05:52:47.087071 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5tm7d" Dec 15 05:52:47 crc kubenswrapper[4747]: I1215 05:52:47.124378 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5tm7d" Dec 15 05:52:47 crc kubenswrapper[4747]: I1215 05:52:47.878801 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-5tm7d" Dec 15 05:52:48 crc kubenswrapper[4747]: I1215 05:52:48.349788 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-84m7w" Dec 15 05:52:48 crc kubenswrapper[4747]: I1215 05:52:48.399570 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-84m7w" Dec 15 05:52:48 crc kubenswrapper[4747]: I1215 05:52:48.977653 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5tm7d"] Dec 15 05:52:49 crc kubenswrapper[4747]: I1215 05:52:49.881772 4747 scope.go:117] "RemoveContainer" containerID="b1b4e5ec228894434b1be940ea3fa0b3387bd126bee48d1e2b2895dd9b621ace" Dec 15 05:52:49 crc kubenswrapper[4747]: I1215 05:52:49.926945 4747 scope.go:117] "RemoveContainer" containerID="78644837d62eade3980ba2fac1f320a2ff11b1f64ee916f7110386745b8a679c" Dec 15 05:52:49 crc kubenswrapper[4747]: I1215 05:52:49.955223 4747 scope.go:117] "RemoveContainer" containerID="722649aebf439cff5f3e65b76a739e8a8d7479c5072467c97f6e8da77ae71b08" Dec 15 05:52:49 crc kubenswrapper[4747]: E1215 05:52:49.955570 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"722649aebf439cff5f3e65b76a739e8a8d7479c5072467c97f6e8da77ae71b08\": container with ID starting with 722649aebf439cff5f3e65b76a739e8a8d7479c5072467c97f6e8da77ae71b08 not found: ID does not exist" containerID="722649aebf439cff5f3e65b76a739e8a8d7479c5072467c97f6e8da77ae71b08" Dec 15 05:52:49 crc kubenswrapper[4747]: I1215 05:52:49.955606 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"722649aebf439cff5f3e65b76a739e8a8d7479c5072467c97f6e8da77ae71b08"} err="failed to get container status \"722649aebf439cff5f3e65b76a739e8a8d7479c5072467c97f6e8da77ae71b08\": rpc error: code = NotFound desc = could 
not find container \"722649aebf439cff5f3e65b76a739e8a8d7479c5072467c97f6e8da77ae71b08\": container with ID starting with 722649aebf439cff5f3e65b76a739e8a8d7479c5072467c97f6e8da77ae71b08 not found: ID does not exist" Dec 15 05:52:49 crc kubenswrapper[4747]: I1215 05:52:49.955630 4747 scope.go:117] "RemoveContainer" containerID="b1b4e5ec228894434b1be940ea3fa0b3387bd126bee48d1e2b2895dd9b621ace" Dec 15 05:52:49 crc kubenswrapper[4747]: E1215 05:52:49.955910 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1b4e5ec228894434b1be940ea3fa0b3387bd126bee48d1e2b2895dd9b621ace\": container with ID starting with b1b4e5ec228894434b1be940ea3fa0b3387bd126bee48d1e2b2895dd9b621ace not found: ID does not exist" containerID="b1b4e5ec228894434b1be940ea3fa0b3387bd126bee48d1e2b2895dd9b621ace" Dec 15 05:52:49 crc kubenswrapper[4747]: I1215 05:52:49.955946 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1b4e5ec228894434b1be940ea3fa0b3387bd126bee48d1e2b2895dd9b621ace"} err="failed to get container status \"b1b4e5ec228894434b1be940ea3fa0b3387bd126bee48d1e2b2895dd9b621ace\": rpc error: code = NotFound desc = could not find container \"b1b4e5ec228894434b1be940ea3fa0b3387bd126bee48d1e2b2895dd9b621ace\": container with ID starting with b1b4e5ec228894434b1be940ea3fa0b3387bd126bee48d1e2b2895dd9b621ace not found: ID does not exist" Dec 15 05:52:49 crc kubenswrapper[4747]: I1215 05:52:49.955962 4747 scope.go:117] "RemoveContainer" containerID="78644837d62eade3980ba2fac1f320a2ff11b1f64ee916f7110386745b8a679c" Dec 15 05:52:49 crc kubenswrapper[4747]: E1215 05:52:49.956314 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78644837d62eade3980ba2fac1f320a2ff11b1f64ee916f7110386745b8a679c\": container with ID starting with 78644837d62eade3980ba2fac1f320a2ff11b1f64ee916f7110386745b8a679c not found: 
ID does not exist" containerID="78644837d62eade3980ba2fac1f320a2ff11b1f64ee916f7110386745b8a679c" Dec 15 05:52:49 crc kubenswrapper[4747]: I1215 05:52:49.956342 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78644837d62eade3980ba2fac1f320a2ff11b1f64ee916f7110386745b8a679c"} err="failed to get container status \"78644837d62eade3980ba2fac1f320a2ff11b1f64ee916f7110386745b8a679c\": rpc error: code = NotFound desc = could not find container \"78644837d62eade3980ba2fac1f320a2ff11b1f64ee916f7110386745b8a679c\": container with ID starting with 78644837d62eade3980ba2fac1f320a2ff11b1f64ee916f7110386745b8a679c not found: ID does not exist" Dec 15 05:52:50 crc kubenswrapper[4747]: I1215 05:52:50.883285 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79fcd98c9d-ccgjm" event={"ID":"ecaeb0c4-ae67-4901-bc77-863b3a8c5c03","Type":"ContainerStarted","Data":"443f4b835c4533c33f899246401278530b5bc469535797c50ddc2d5ec38c8c7e"} Dec 15 05:52:50 crc kubenswrapper[4747]: I1215 05:52:50.884736 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:50 crc kubenswrapper[4747]: I1215 05:52:50.885087 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:50 crc kubenswrapper[4747]: I1215 05:52:50.902131 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"873188a0-9dbb-4c95-b39e-cd503e07e59f","Type":"ContainerStarted","Data":"14cd76688175940d3ddf150fc4350378f3e639ba41947551102599e4dffb899f"} Dec 15 05:52:50 crc kubenswrapper[4747]: I1215 05:52:50.902252 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="873188a0-9dbb-4c95-b39e-cd503e07e59f" containerName="ceilometer-central-agent" 
containerID="cri-o://8dc5ef3d40e3025d26d6edb7c1e9ee191c80cb1f496fb2b91f4aed5be6317712" gracePeriod=30 Dec 15 05:52:50 crc kubenswrapper[4747]: I1215 05:52:50.902397 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 15 05:52:50 crc kubenswrapper[4747]: I1215 05:52:50.903168 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="873188a0-9dbb-4c95-b39e-cd503e07e59f" containerName="sg-core" containerID="cri-o://f4e96925bcfa06c1f0423d51300c6d70ba8b0eaceb34083e2095061f09f56e24" gracePeriod=30 Dec 15 05:52:50 crc kubenswrapper[4747]: I1215 05:52:50.903203 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="873188a0-9dbb-4c95-b39e-cd503e07e59f" containerName="proxy-httpd" containerID="cri-o://14cd76688175940d3ddf150fc4350378f3e639ba41947551102599e4dffb899f" gracePeriod=30 Dec 15 05:52:50 crc kubenswrapper[4747]: I1215 05:52:50.903214 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="873188a0-9dbb-4c95-b39e-cd503e07e59f" containerName="ceilometer-notification-agent" containerID="cri-o://c70b4cdb307a145485d475fb474f8b91fb623356651dc61ade264fe0001cb4f8" gracePeriod=30 Dec 15 05:52:50 crc kubenswrapper[4747]: I1215 05:52:50.909915 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-79fcd98c9d-ccgjm" podStartSLOduration=7.909899006 podStartE2EDuration="7.909899006s" podCreationTimestamp="2025-12-15 05:52:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:52:50.902097424 +0000 UTC m=+934.598609341" watchObservedRunningTime="2025-12-15 05:52:50.909899006 +0000 UTC m=+934.606410912" Dec 15 05:52:50 crc kubenswrapper[4747]: I1215 05:52:50.918010 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-sync-4hlv6" event={"ID":"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6","Type":"ContainerStarted","Data":"f248b70f5c2337578dfbc201e5e7e8dc36eef21d39d5ee2844f954cf6807cf8b"} Dec 15 05:52:50 crc kubenswrapper[4747]: I1215 05:52:50.918044 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5tm7d" podUID="7aa44578-a974-4c1f-90db-014ecf544678" containerName="registry-server" containerID="cri-o://85d812e3d1f7bc97ecfe41c52a65a3a2b62bfb65ac6b2bd6e8322b67c7dac138" gracePeriod=2 Dec 15 05:52:50 crc kubenswrapper[4747]: I1215 05:52:50.929234 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.011843504 podStartE2EDuration="45.929220071s" podCreationTimestamp="2025-12-15 05:52:05 +0000 UTC" firstStartedPulling="2025-12-15 05:52:06.038762172 +0000 UTC m=+889.735274090" lastFinishedPulling="2025-12-15 05:52:49.95613874 +0000 UTC m=+933.652650657" observedRunningTime="2025-12-15 05:52:50.91824966 +0000 UTC m=+934.614761567" watchObservedRunningTime="2025-12-15 05:52:50.929220071 +0000 UTC m=+934.625731988" Dec 15 05:52:50 crc kubenswrapper[4747]: I1215 05:52:50.941073 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-4hlv6" podStartSLOduration=2.071926427 podStartE2EDuration="45.941060449s" podCreationTimestamp="2025-12-15 05:52:05 +0000 UTC" firstStartedPulling="2025-12-15 05:52:06.062308675 +0000 UTC m=+889.758820592" lastFinishedPulling="2025-12-15 05:52:49.931442696 +0000 UTC m=+933.627954614" observedRunningTime="2025-12-15 05:52:50.940601806 +0000 UTC m=+934.637113724" watchObservedRunningTime="2025-12-15 05:52:50.941060449 +0000 UTC m=+934.637572366" Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.376822 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-84m7w"] Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 
05:52:51.377503 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-84m7w" podUID="e8515fd4-ce21-4f89-a703-e3807fa6fd90" containerName="registry-server" containerID="cri-o://850f93d17e36f8d99142b261f1ca55952d08b8c04196c71a24c28c350ff612bd" gracePeriod=2 Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.390466 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5tm7d" Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.499104 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aa44578-a974-4c1f-90db-014ecf544678-catalog-content\") pod \"7aa44578-a974-4c1f-90db-014ecf544678\" (UID: \"7aa44578-a974-4c1f-90db-014ecf544678\") " Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.499538 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm6zg\" (UniqueName: \"kubernetes.io/projected/7aa44578-a974-4c1f-90db-014ecf544678-kube-api-access-wm6zg\") pod \"7aa44578-a974-4c1f-90db-014ecf544678\" (UID: \"7aa44578-a974-4c1f-90db-014ecf544678\") " Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.499677 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aa44578-a974-4c1f-90db-014ecf544678-utilities\") pod \"7aa44578-a974-4c1f-90db-014ecf544678\" (UID: \"7aa44578-a974-4c1f-90db-014ecf544678\") " Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.500452 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aa44578-a974-4c1f-90db-014ecf544678-utilities" (OuterVolumeSpecName: "utilities") pod "7aa44578-a974-4c1f-90db-014ecf544678" (UID: "7aa44578-a974-4c1f-90db-014ecf544678"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.506316 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aa44578-a974-4c1f-90db-014ecf544678-kube-api-access-wm6zg" (OuterVolumeSpecName: "kube-api-access-wm6zg") pod "7aa44578-a974-4c1f-90db-014ecf544678" (UID: "7aa44578-a974-4c1f-90db-014ecf544678"). InnerVolumeSpecName "kube-api-access-wm6zg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.604669 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm6zg\" (UniqueName: \"kubernetes.io/projected/7aa44578-a974-4c1f-90db-014ecf544678-kube-api-access-wm6zg\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.604709 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aa44578-a974-4c1f-90db-014ecf544678-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.608831 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aa44578-a974-4c1f-90db-014ecf544678-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7aa44578-a974-4c1f-90db-014ecf544678" (UID: "7aa44578-a974-4c1f-90db-014ecf544678"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.616134 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.692846 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d95b78fc9-pqwbl"] Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.693094 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" podUID="a57fd65d-825d-4fd1-a5e6-ab244093e1b8" containerName="dnsmasq-dns" containerID="cri-o://e881a4c63975aef39ce50f2a5c86af3367eef09fecf88f411f006760b4acaf8d" gracePeriod=10 Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.717727 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aa44578-a974-4c1f-90db-014ecf544678-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.814375 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-77755588cd-rgjzw" Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.844854 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-84m7w" Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.920955 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8515fd4-ce21-4f89-a703-e3807fa6fd90-utilities\") pod \"e8515fd4-ce21-4f89-a703-e3807fa6fd90\" (UID: \"e8515fd4-ce21-4f89-a703-e3807fa6fd90\") " Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.921019 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8515fd4-ce21-4f89-a703-e3807fa6fd90-catalog-content\") pod \"e8515fd4-ce21-4f89-a703-e3807fa6fd90\" (UID: \"e8515fd4-ce21-4f89-a703-e3807fa6fd90\") " Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.921123 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm446\" (UniqueName: \"kubernetes.io/projected/e8515fd4-ce21-4f89-a703-e3807fa6fd90-kube-api-access-cm446\") pod \"e8515fd4-ce21-4f89-a703-e3807fa6fd90\" (UID: \"e8515fd4-ce21-4f89-a703-e3807fa6fd90\") " Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.935488 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8515fd4-ce21-4f89-a703-e3807fa6fd90-utilities" (OuterVolumeSpecName: "utilities") pod "e8515fd4-ce21-4f89-a703-e3807fa6fd90" (UID: "e8515fd4-ce21-4f89-a703-e3807fa6fd90"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.935704 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8515fd4-ce21-4f89-a703-e3807fa6fd90-kube-api-access-cm446" (OuterVolumeSpecName: "kube-api-access-cm446") pod "e8515fd4-ce21-4f89-a703-e3807fa6fd90" (UID: "e8515fd4-ce21-4f89-a703-e3807fa6fd90"). InnerVolumeSpecName "kube-api-access-cm446". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.955226 4747 generic.go:334] "Generic (PLEG): container finished" podID="e8515fd4-ce21-4f89-a703-e3807fa6fd90" containerID="850f93d17e36f8d99142b261f1ca55952d08b8c04196c71a24c28c350ff612bd" exitCode=0 Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.955296 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84m7w" event={"ID":"e8515fd4-ce21-4f89-a703-e3807fa6fd90","Type":"ContainerDied","Data":"850f93d17e36f8d99142b261f1ca55952d08b8c04196c71a24c28c350ff612bd"} Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.955328 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84m7w" event={"ID":"e8515fd4-ce21-4f89-a703-e3807fa6fd90","Type":"ContainerDied","Data":"2d9c4e83ee7fffbc698983690f48fa8e0961af61f08128cb582d5a7f521a8b4a"} Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.955350 4747 scope.go:117] "RemoveContainer" containerID="850f93d17e36f8d99142b261f1ca55952d08b8c04196c71a24c28c350ff612bd" Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.955459 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-84m7w" Dec 15 05:52:51 crc kubenswrapper[4747]: I1215 05:52:51.967662 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8515fd4-ce21-4f89-a703-e3807fa6fd90-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8515fd4-ce21-4f89-a703-e3807fa6fd90" (UID: "e8515fd4-ce21-4f89-a703-e3807fa6fd90"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.026174 4747 generic.go:334] "Generic (PLEG): container finished" podID="a57fd65d-825d-4fd1-a5e6-ab244093e1b8" containerID="e881a4c63975aef39ce50f2a5c86af3367eef09fecf88f411f006760b4acaf8d" exitCode=0 Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.026365 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" event={"ID":"a57fd65d-825d-4fd1-a5e6-ab244093e1b8","Type":"ContainerDied","Data":"e881a4c63975aef39ce50f2a5c86af3367eef09fecf88f411f006760b4acaf8d"} Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.040225 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8515fd4-ce21-4f89-a703-e3807fa6fd90-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.040253 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8515fd4-ce21-4f89-a703-e3807fa6fd90-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.040268 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm446\" (UniqueName: \"kubernetes.io/projected/e8515fd4-ce21-4f89-a703-e3807fa6fd90-kube-api-access-cm446\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.067127 4747 scope.go:117] "RemoveContainer" containerID="be2b4f80044db2a33a65ac2700a43fb24fe9df3d40b1aae0c63903743706a042" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.093781 4747 generic.go:334] "Generic (PLEG): container finished" podID="7aa44578-a974-4c1f-90db-014ecf544678" containerID="85d812e3d1f7bc97ecfe41c52a65a3a2b62bfb65ac6b2bd6e8322b67c7dac138" exitCode=0 Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.093889 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-5tm7d" event={"ID":"7aa44578-a974-4c1f-90db-014ecf544678","Type":"ContainerDied","Data":"85d812e3d1f7bc97ecfe41c52a65a3a2b62bfb65ac6b2bd6e8322b67c7dac138"} Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.093936 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tm7d" event={"ID":"7aa44578-a974-4c1f-90db-014ecf544678","Type":"ContainerDied","Data":"8c6843bd23ea75a562c8a3bfcfada602d25bfd8807bd52821633bee358f5b150"} Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.094025 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5tm7d" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.150026 4747 scope.go:117] "RemoveContainer" containerID="ac9e6142f75de10b94be19dc75729a1cac053a1cb9c59aa4f191b74ed95b1d1b" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.168101 4747 generic.go:334] "Generic (PLEG): container finished" podID="873188a0-9dbb-4c95-b39e-cd503e07e59f" containerID="14cd76688175940d3ddf150fc4350378f3e639ba41947551102599e4dffb899f" exitCode=0 Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.168131 4747 generic.go:334] "Generic (PLEG): container finished" podID="873188a0-9dbb-4c95-b39e-cd503e07e59f" containerID="f4e96925bcfa06c1f0423d51300c6d70ba8b0eaceb34083e2095061f09f56e24" exitCode=2 Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.168140 4747 generic.go:334] "Generic (PLEG): container finished" podID="873188a0-9dbb-4c95-b39e-cd503e07e59f" containerID="8dc5ef3d40e3025d26d6edb7c1e9ee191c80cb1f496fb2b91f4aed5be6317712" exitCode=0 Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.168236 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"873188a0-9dbb-4c95-b39e-cd503e07e59f","Type":"ContainerDied","Data":"14cd76688175940d3ddf150fc4350378f3e639ba41947551102599e4dffb899f"} Dec 15 05:52:52 crc kubenswrapper[4747]: 
I1215 05:52:52.168273 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"873188a0-9dbb-4c95-b39e-cd503e07e59f","Type":"ContainerDied","Data":"f4e96925bcfa06c1f0423d51300c6d70ba8b0eaceb34083e2095061f09f56e24"} Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.168284 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"873188a0-9dbb-4c95-b39e-cd503e07e59f","Type":"ContainerDied","Data":"8dc5ef3d40e3025d26d6edb7c1e9ee191c80cb1f496fb2b91f4aed5be6317712"} Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.204276 4747 scope.go:117] "RemoveContainer" containerID="850f93d17e36f8d99142b261f1ca55952d08b8c04196c71a24c28c350ff612bd" Dec 15 05:52:52 crc kubenswrapper[4747]: E1215 05:52:52.206307 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"850f93d17e36f8d99142b261f1ca55952d08b8c04196c71a24c28c350ff612bd\": container with ID starting with 850f93d17e36f8d99142b261f1ca55952d08b8c04196c71a24c28c350ff612bd not found: ID does not exist" containerID="850f93d17e36f8d99142b261f1ca55952d08b8c04196c71a24c28c350ff612bd" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.206440 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"850f93d17e36f8d99142b261f1ca55952d08b8c04196c71a24c28c350ff612bd"} err="failed to get container status \"850f93d17e36f8d99142b261f1ca55952d08b8c04196c71a24c28c350ff612bd\": rpc error: code = NotFound desc = could not find container \"850f93d17e36f8d99142b261f1ca55952d08b8c04196c71a24c28c350ff612bd\": container with ID starting with 850f93d17e36f8d99142b261f1ca55952d08b8c04196c71a24c28c350ff612bd not found: ID does not exist" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.206525 4747 scope.go:117] "RemoveContainer" containerID="be2b4f80044db2a33a65ac2700a43fb24fe9df3d40b1aae0c63903743706a042" Dec 15 05:52:52 crc 
kubenswrapper[4747]: E1215 05:52:52.208309 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be2b4f80044db2a33a65ac2700a43fb24fe9df3d40b1aae0c63903743706a042\": container with ID starting with be2b4f80044db2a33a65ac2700a43fb24fe9df3d40b1aae0c63903743706a042 not found: ID does not exist" containerID="be2b4f80044db2a33a65ac2700a43fb24fe9df3d40b1aae0c63903743706a042" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.208388 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be2b4f80044db2a33a65ac2700a43fb24fe9df3d40b1aae0c63903743706a042"} err="failed to get container status \"be2b4f80044db2a33a65ac2700a43fb24fe9df3d40b1aae0c63903743706a042\": rpc error: code = NotFound desc = could not find container \"be2b4f80044db2a33a65ac2700a43fb24fe9df3d40b1aae0c63903743706a042\": container with ID starting with be2b4f80044db2a33a65ac2700a43fb24fe9df3d40b1aae0c63903743706a042 not found: ID does not exist" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.208463 4747 scope.go:117] "RemoveContainer" containerID="ac9e6142f75de10b94be19dc75729a1cac053a1cb9c59aa4f191b74ed95b1d1b" Dec 15 05:52:52 crc kubenswrapper[4747]: E1215 05:52:52.212019 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac9e6142f75de10b94be19dc75729a1cac053a1cb9c59aa4f191b74ed95b1d1b\": container with ID starting with ac9e6142f75de10b94be19dc75729a1cac053a1cb9c59aa4f191b74ed95b1d1b not found: ID does not exist" containerID="ac9e6142f75de10b94be19dc75729a1cac053a1cb9c59aa4f191b74ed95b1d1b" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.212110 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac9e6142f75de10b94be19dc75729a1cac053a1cb9c59aa4f191b74ed95b1d1b"} err="failed to get container status 
\"ac9e6142f75de10b94be19dc75729a1cac053a1cb9c59aa4f191b74ed95b1d1b\": rpc error: code = NotFound desc = could not find container \"ac9e6142f75de10b94be19dc75729a1cac053a1cb9c59aa4f191b74ed95b1d1b\": container with ID starting with ac9e6142f75de10b94be19dc75729a1cac053a1cb9c59aa4f191b74ed95b1d1b not found: ID does not exist" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.212196 4747 scope.go:117] "RemoveContainer" containerID="85d812e3d1f7bc97ecfe41c52a65a3a2b62bfb65ac6b2bd6e8322b67c7dac138" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.215620 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5tm7d"] Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.223176 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5tm7d"] Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.249880 4747 scope.go:117] "RemoveContainer" containerID="b4ca73ebb81e4e12a5d93ef322091d87b921b8f98024f11ff81cbd6aca39bf5c" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.284826 4747 scope.go:117] "RemoveContainer" containerID="a8955a242e19efdbabedafcfb8de22160a828e444db40429a0dce2745a98fff3" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.358167 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.370007 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-84m7w"] Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.372092 4747 scope.go:117] "RemoveContainer" containerID="85d812e3d1f7bc97ecfe41c52a65a3a2b62bfb65ac6b2bd6e8322b67c7dac138" Dec 15 05:52:52 crc kubenswrapper[4747]: E1215 05:52:52.372783 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85d812e3d1f7bc97ecfe41c52a65a3a2b62bfb65ac6b2bd6e8322b67c7dac138\": container with ID starting with 85d812e3d1f7bc97ecfe41c52a65a3a2b62bfb65ac6b2bd6e8322b67c7dac138 not found: ID does not exist" containerID="85d812e3d1f7bc97ecfe41c52a65a3a2b62bfb65ac6b2bd6e8322b67c7dac138" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.372824 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85d812e3d1f7bc97ecfe41c52a65a3a2b62bfb65ac6b2bd6e8322b67c7dac138"} err="failed to get container status \"85d812e3d1f7bc97ecfe41c52a65a3a2b62bfb65ac6b2bd6e8322b67c7dac138\": rpc error: code = NotFound desc = could not find container \"85d812e3d1f7bc97ecfe41c52a65a3a2b62bfb65ac6b2bd6e8322b67c7dac138\": container with ID starting with 85d812e3d1f7bc97ecfe41c52a65a3a2b62bfb65ac6b2bd6e8322b67c7dac138 not found: ID does not exist" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.372850 4747 scope.go:117] "RemoveContainer" containerID="b4ca73ebb81e4e12a5d93ef322091d87b921b8f98024f11ff81cbd6aca39bf5c" Dec 15 05:52:52 crc kubenswrapper[4747]: E1215 05:52:52.373153 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4ca73ebb81e4e12a5d93ef322091d87b921b8f98024f11ff81cbd6aca39bf5c\": container with ID starting with 
b4ca73ebb81e4e12a5d93ef322091d87b921b8f98024f11ff81cbd6aca39bf5c not found: ID does not exist" containerID="b4ca73ebb81e4e12a5d93ef322091d87b921b8f98024f11ff81cbd6aca39bf5c" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.373191 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4ca73ebb81e4e12a5d93ef322091d87b921b8f98024f11ff81cbd6aca39bf5c"} err="failed to get container status \"b4ca73ebb81e4e12a5d93ef322091d87b921b8f98024f11ff81cbd6aca39bf5c\": rpc error: code = NotFound desc = could not find container \"b4ca73ebb81e4e12a5d93ef322091d87b921b8f98024f11ff81cbd6aca39bf5c\": container with ID starting with b4ca73ebb81e4e12a5d93ef322091d87b921b8f98024f11ff81cbd6aca39bf5c not found: ID does not exist" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.373216 4747 scope.go:117] "RemoveContainer" containerID="a8955a242e19efdbabedafcfb8de22160a828e444db40429a0dce2745a98fff3" Dec 15 05:52:52 crc kubenswrapper[4747]: E1215 05:52:52.373537 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8955a242e19efdbabedafcfb8de22160a828e444db40429a0dce2745a98fff3\": container with ID starting with a8955a242e19efdbabedafcfb8de22160a828e444db40429a0dce2745a98fff3 not found: ID does not exist" containerID="a8955a242e19efdbabedafcfb8de22160a828e444db40429a0dce2745a98fff3" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.373557 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8955a242e19efdbabedafcfb8de22160a828e444db40429a0dce2745a98fff3"} err="failed to get container status \"a8955a242e19efdbabedafcfb8de22160a828e444db40429a0dce2745a98fff3\": rpc error: code = NotFound desc = could not find container \"a8955a242e19efdbabedafcfb8de22160a828e444db40429a0dce2745a98fff3\": container with ID starting with a8955a242e19efdbabedafcfb8de22160a828e444db40429a0dce2745a98fff3 not found: ID does not 
exist" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.384959 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-84m7w"] Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.423495 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.452742 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-ovsdbserver-sb\") pod \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.452812 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-dns-swift-storage-0\") pod \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.452886 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-dns-svc\") pod \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.452917 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-ovsdbserver-nb\") pod \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.452950 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg6pb\" (UniqueName: 
\"kubernetes.io/projected/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-kube-api-access-cg6pb\") pod \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.453100 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-config\") pod \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\" (UID: \"a57fd65d-825d-4fd1-a5e6-ab244093e1b8\") " Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.477035 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-kube-api-access-cg6pb" (OuterVolumeSpecName: "kube-api-access-cg6pb") pod "a57fd65d-825d-4fd1-a5e6-ab244093e1b8" (UID: "a57fd65d-825d-4fd1-a5e6-ab244093e1b8"). InnerVolumeSpecName "kube-api-access-cg6pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.497043 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a57fd65d-825d-4fd1-a5e6-ab244093e1b8" (UID: "a57fd65d-825d-4fd1-a5e6-ab244093e1b8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.501251 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a57fd65d-825d-4fd1-a5e6-ab244093e1b8" (UID: "a57fd65d-825d-4fd1-a5e6-ab244093e1b8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.515578 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-config" (OuterVolumeSpecName: "config") pod "a57fd65d-825d-4fd1-a5e6-ab244093e1b8" (UID: "a57fd65d-825d-4fd1-a5e6-ab244093e1b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.518152 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a57fd65d-825d-4fd1-a5e6-ab244093e1b8" (UID: "a57fd65d-825d-4fd1-a5e6-ab244093e1b8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.523626 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a57fd65d-825d-4fd1-a5e6-ab244093e1b8" (UID: "a57fd65d-825d-4fd1-a5e6-ab244093e1b8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.560269 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.560296 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.560308 4747 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.560328 4747 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.560336 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.560345 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg6pb\" (UniqueName: \"kubernetes.io/projected/a57fd65d-825d-4fd1-a5e6-ab244093e1b8-kube-api-access-cg6pb\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.640584 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aa44578-a974-4c1f-90db-014ecf544678" path="/var/lib/kubelet/pods/7aa44578-a974-4c1f-90db-014ecf544678/volumes" Dec 15 05:52:52 crc kubenswrapper[4747]: I1215 05:52:52.641623 4747 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="e8515fd4-ce21-4f89-a703-e3807fa6fd90" path="/var/lib/kubelet/pods/e8515fd4-ce21-4f89-a703-e3807fa6fd90/volumes" Dec 15 05:52:53 crc kubenswrapper[4747]: I1215 05:52:53.177057 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" event={"ID":"a57fd65d-825d-4fd1-a5e6-ab244093e1b8","Type":"ContainerDied","Data":"f86724a4d7de012ad911f0cb75c54e45a797271517def5c52103c27d8d0eab68"} Dec 15 05:52:53 crc kubenswrapper[4747]: I1215 05:52:53.177110 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d95b78fc9-pqwbl" Dec 15 05:52:53 crc kubenswrapper[4747]: I1215 05:52:53.177129 4747 scope.go:117] "RemoveContainer" containerID="e881a4c63975aef39ce50f2a5c86af3367eef09fecf88f411f006760b4acaf8d" Dec 15 05:52:53 crc kubenswrapper[4747]: I1215 05:52:53.182799 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:52:53 crc kubenswrapper[4747]: I1215 05:52:53.199009 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d95b78fc9-pqwbl"] Dec 15 05:52:53 crc kubenswrapper[4747]: I1215 05:52:53.199617 4747 scope.go:117] "RemoveContainer" containerID="99d4532c1418e0da8750fbd98f9376279802090c90582a076f704e0a9140f38c" Dec 15 05:52:53 crc kubenswrapper[4747]: I1215 05:52:53.206451 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d95b78fc9-pqwbl"] Dec 15 05:52:53 crc kubenswrapper[4747]: I1215 05:52:53.356339 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:52:53 crc kubenswrapper[4747]: I1215 05:52:53.649642 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7b9f9565dc-vlcmk" Dec 15 05:52:53 crc kubenswrapper[4747]: I1215 05:52:53.700265 4747 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/neutron-77755588cd-rgjzw"] Dec 15 05:52:53 crc kubenswrapper[4747]: I1215 05:52:53.700484 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-77755588cd-rgjzw" podUID="b6cfb859-aec3-41c6-bb59-7e84b23396bd" containerName="neutron-api" containerID="cri-o://41a17a55928b61d3480f0803e540c0ff36e7018c2c92f3f46c56a1c54f86dd4d" gracePeriod=30 Dec 15 05:52:53 crc kubenswrapper[4747]: I1215 05:52:53.700639 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-77755588cd-rgjzw" podUID="b6cfb859-aec3-41c6-bb59-7e84b23396bd" containerName="neutron-httpd" containerID="cri-o://1301cbc440a4c64083f43926b1093ad190605f737fcb696187c0c22422a5854f" gracePeriod=30 Dec 15 05:52:53 crc kubenswrapper[4747]: I1215 05:52:53.993919 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.096588 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/873188a0-9dbb-4c95-b39e-cd503e07e59f-log-httpd\") pod \"873188a0-9dbb-4c95-b39e-cd503e07e59f\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.096654 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-combined-ca-bundle\") pod \"873188a0-9dbb-4c95-b39e-cd503e07e59f\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.096894 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/873188a0-9dbb-4c95-b39e-cd503e07e59f-run-httpd\") pod \"873188a0-9dbb-4c95-b39e-cd503e07e59f\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " Dec 15 
05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.096958 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9ktk\" (UniqueName: \"kubernetes.io/projected/873188a0-9dbb-4c95-b39e-cd503e07e59f-kube-api-access-b9ktk\") pod \"873188a0-9dbb-4c95-b39e-cd503e07e59f\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.097247 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-config-data\") pod \"873188a0-9dbb-4c95-b39e-cd503e07e59f\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.097401 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/873188a0-9dbb-4c95-b39e-cd503e07e59f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "873188a0-9dbb-4c95-b39e-cd503e07e59f" (UID: "873188a0-9dbb-4c95-b39e-cd503e07e59f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.097442 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-scripts\") pod \"873188a0-9dbb-4c95-b39e-cd503e07e59f\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.097427 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/873188a0-9dbb-4c95-b39e-cd503e07e59f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "873188a0-9dbb-4c95-b39e-cd503e07e59f" (UID: "873188a0-9dbb-4c95-b39e-cd503e07e59f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.097481 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-sg-core-conf-yaml\") pod \"873188a0-9dbb-4c95-b39e-cd503e07e59f\" (UID: \"873188a0-9dbb-4c95-b39e-cd503e07e59f\") " Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.098911 4747 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/873188a0-9dbb-4c95-b39e-cd503e07e59f-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.098948 4747 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/873188a0-9dbb-4c95-b39e-cd503e07e59f-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.117747 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/873188a0-9dbb-4c95-b39e-cd503e07e59f-kube-api-access-b9ktk" (OuterVolumeSpecName: "kube-api-access-b9ktk") pod "873188a0-9dbb-4c95-b39e-cd503e07e59f" (UID: "873188a0-9dbb-4c95-b39e-cd503e07e59f"). InnerVolumeSpecName "kube-api-access-b9ktk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.118215 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-scripts" (OuterVolumeSpecName: "scripts") pod "873188a0-9dbb-4c95-b39e-cd503e07e59f" (UID: "873188a0-9dbb-4c95-b39e-cd503e07e59f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.124579 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "873188a0-9dbb-4c95-b39e-cd503e07e59f" (UID: "873188a0-9dbb-4c95-b39e-cd503e07e59f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.171510 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "873188a0-9dbb-4c95-b39e-cd503e07e59f" (UID: "873188a0-9dbb-4c95-b39e-cd503e07e59f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.197870 4747 generic.go:334] "Generic (PLEG): container finished" podID="873188a0-9dbb-4c95-b39e-cd503e07e59f" containerID="c70b4cdb307a145485d475fb474f8b91fb623356651dc61ade264fe0001cb4f8" exitCode=0 Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.197954 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"873188a0-9dbb-4c95-b39e-cd503e07e59f","Type":"ContainerDied","Data":"c70b4cdb307a145485d475fb474f8b91fb623356651dc61ade264fe0001cb4f8"} Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.197989 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"873188a0-9dbb-4c95-b39e-cd503e07e59f","Type":"ContainerDied","Data":"8ed9150ac3e057462166cf419eefa52777e70251bf023b66f8d3d7778dc03a50"} Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.198009 4747 scope.go:117] "RemoveContainer" containerID="14cd76688175940d3ddf150fc4350378f3e639ba41947551102599e4dffb899f" Dec 15 05:52:54 crc 
kubenswrapper[4747]: I1215 05:52:54.198115 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.200411 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9ktk\" (UniqueName: \"kubernetes.io/projected/873188a0-9dbb-4c95-b39e-cd503e07e59f-kube-api-access-b9ktk\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.200427 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.200459 4747 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.200469 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.201434 4747 generic.go:334] "Generic (PLEG): container finished" podID="b6cfb859-aec3-41c6-bb59-7e84b23396bd" containerID="1301cbc440a4c64083f43926b1093ad190605f737fcb696187c0c22422a5854f" exitCode=0 Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.201497 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77755588cd-rgjzw" event={"ID":"b6cfb859-aec3-41c6-bb59-7e84b23396bd","Type":"ContainerDied","Data":"1301cbc440a4c64083f43926b1093ad190605f737fcb696187c0c22422a5854f"} Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.204041 4747 generic.go:334] "Generic (PLEG): container finished" podID="2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6" 
containerID="f248b70f5c2337578dfbc201e5e7e8dc36eef21d39d5ee2844f954cf6807cf8b" exitCode=0 Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.204104 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4hlv6" event={"ID":"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6","Type":"ContainerDied","Data":"f248b70f5c2337578dfbc201e5e7e8dc36eef21d39d5ee2844f954cf6807cf8b"} Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.209730 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-config-data" (OuterVolumeSpecName: "config-data") pod "873188a0-9dbb-4c95-b39e-cd503e07e59f" (UID: "873188a0-9dbb-4c95-b39e-cd503e07e59f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.271436 4747 scope.go:117] "RemoveContainer" containerID="f4e96925bcfa06c1f0423d51300c6d70ba8b0eaceb34083e2095061f09f56e24" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.296300 4747 scope.go:117] "RemoveContainer" containerID="c70b4cdb307a145485d475fb474f8b91fb623356651dc61ade264fe0001cb4f8" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.303164 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873188a0-9dbb-4c95-b39e-cd503e07e59f-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.320753 4747 scope.go:117] "RemoveContainer" containerID="8dc5ef3d40e3025d26d6edb7c1e9ee191c80cb1f496fb2b91f4aed5be6317712" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.350062 4747 scope.go:117] "RemoveContainer" containerID="14cd76688175940d3ddf150fc4350378f3e639ba41947551102599e4dffb899f" Dec 15 05:52:54 crc kubenswrapper[4747]: E1215 05:52:54.350599 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"14cd76688175940d3ddf150fc4350378f3e639ba41947551102599e4dffb899f\": container with ID starting with 14cd76688175940d3ddf150fc4350378f3e639ba41947551102599e4dffb899f not found: ID does not exist" containerID="14cd76688175940d3ddf150fc4350378f3e639ba41947551102599e4dffb899f" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.350641 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14cd76688175940d3ddf150fc4350378f3e639ba41947551102599e4dffb899f"} err="failed to get container status \"14cd76688175940d3ddf150fc4350378f3e639ba41947551102599e4dffb899f\": rpc error: code = NotFound desc = could not find container \"14cd76688175940d3ddf150fc4350378f3e639ba41947551102599e4dffb899f\": container with ID starting with 14cd76688175940d3ddf150fc4350378f3e639ba41947551102599e4dffb899f not found: ID does not exist" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.350672 4747 scope.go:117] "RemoveContainer" containerID="f4e96925bcfa06c1f0423d51300c6d70ba8b0eaceb34083e2095061f09f56e24" Dec 15 05:52:54 crc kubenswrapper[4747]: E1215 05:52:54.354070 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4e96925bcfa06c1f0423d51300c6d70ba8b0eaceb34083e2095061f09f56e24\": container with ID starting with f4e96925bcfa06c1f0423d51300c6d70ba8b0eaceb34083e2095061f09f56e24 not found: ID does not exist" containerID="f4e96925bcfa06c1f0423d51300c6d70ba8b0eaceb34083e2095061f09f56e24" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.354098 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4e96925bcfa06c1f0423d51300c6d70ba8b0eaceb34083e2095061f09f56e24"} err="failed to get container status \"f4e96925bcfa06c1f0423d51300c6d70ba8b0eaceb34083e2095061f09f56e24\": rpc error: code = NotFound desc = could not find container \"f4e96925bcfa06c1f0423d51300c6d70ba8b0eaceb34083e2095061f09f56e24\": container with ID 
starting with f4e96925bcfa06c1f0423d51300c6d70ba8b0eaceb34083e2095061f09f56e24 not found: ID does not exist" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.354114 4747 scope.go:117] "RemoveContainer" containerID="c70b4cdb307a145485d475fb474f8b91fb623356651dc61ade264fe0001cb4f8" Dec 15 05:52:54 crc kubenswrapper[4747]: E1215 05:52:54.354612 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c70b4cdb307a145485d475fb474f8b91fb623356651dc61ade264fe0001cb4f8\": container with ID starting with c70b4cdb307a145485d475fb474f8b91fb623356651dc61ade264fe0001cb4f8 not found: ID does not exist" containerID="c70b4cdb307a145485d475fb474f8b91fb623356651dc61ade264fe0001cb4f8" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.354648 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c70b4cdb307a145485d475fb474f8b91fb623356651dc61ade264fe0001cb4f8"} err="failed to get container status \"c70b4cdb307a145485d475fb474f8b91fb623356651dc61ade264fe0001cb4f8\": rpc error: code = NotFound desc = could not find container \"c70b4cdb307a145485d475fb474f8b91fb623356651dc61ade264fe0001cb4f8\": container with ID starting with c70b4cdb307a145485d475fb474f8b91fb623356651dc61ade264fe0001cb4f8 not found: ID does not exist" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.354674 4747 scope.go:117] "RemoveContainer" containerID="8dc5ef3d40e3025d26d6edb7c1e9ee191c80cb1f496fb2b91f4aed5be6317712" Dec 15 05:52:54 crc kubenswrapper[4747]: E1215 05:52:54.355148 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dc5ef3d40e3025d26d6edb7c1e9ee191c80cb1f496fb2b91f4aed5be6317712\": container with ID starting with 8dc5ef3d40e3025d26d6edb7c1e9ee191c80cb1f496fb2b91f4aed5be6317712 not found: ID does not exist" containerID="8dc5ef3d40e3025d26d6edb7c1e9ee191c80cb1f496fb2b91f4aed5be6317712" Dec 15 
05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.355197 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dc5ef3d40e3025d26d6edb7c1e9ee191c80cb1f496fb2b91f4aed5be6317712"} err="failed to get container status \"8dc5ef3d40e3025d26d6edb7c1e9ee191c80cb1f496fb2b91f4aed5be6317712\": rpc error: code = NotFound desc = could not find container \"8dc5ef3d40e3025d26d6edb7c1e9ee191c80cb1f496fb2b91f4aed5be6317712\": container with ID starting with 8dc5ef3d40e3025d26d6edb7c1e9ee191c80cb1f496fb2b91f4aed5be6317712 not found: ID does not exist" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.528705 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.537525 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.560956 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:52:54 crc kubenswrapper[4747]: E1215 05:52:54.561744 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656ef4f9-8e82-43ce-b0f9-b654bcecb12a" containerName="extract-utilities" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.561779 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="656ef4f9-8e82-43ce-b0f9-b654bcecb12a" containerName="extract-utilities" Dec 15 05:52:54 crc kubenswrapper[4747]: E1215 05:52:54.561804 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8515fd4-ce21-4f89-a703-e3807fa6fd90" containerName="extract-content" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.561814 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8515fd4-ce21-4f89-a703-e3807fa6fd90" containerName="extract-content" Dec 15 05:52:54 crc kubenswrapper[4747]: E1215 05:52:54.561827 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656ef4f9-8e82-43ce-b0f9-b654bcecb12a" 
containerName="registry-server" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.561835 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="656ef4f9-8e82-43ce-b0f9-b654bcecb12a" containerName="registry-server" Dec 15 05:52:54 crc kubenswrapper[4747]: E1215 05:52:54.561855 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8515fd4-ce21-4f89-a703-e3807fa6fd90" containerName="registry-server" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.561862 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8515fd4-ce21-4f89-a703-e3807fa6fd90" containerName="registry-server" Dec 15 05:52:54 crc kubenswrapper[4747]: E1215 05:52:54.561874 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8515fd4-ce21-4f89-a703-e3807fa6fd90" containerName="extract-utilities" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.561882 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8515fd4-ce21-4f89-a703-e3807fa6fd90" containerName="extract-utilities" Dec 15 05:52:54 crc kubenswrapper[4747]: E1215 05:52:54.561896 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="873188a0-9dbb-4c95-b39e-cd503e07e59f" containerName="ceilometer-notification-agent" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.561905 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="873188a0-9dbb-4c95-b39e-cd503e07e59f" containerName="ceilometer-notification-agent" Dec 15 05:52:54 crc kubenswrapper[4747]: E1215 05:52:54.561914 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="873188a0-9dbb-4c95-b39e-cd503e07e59f" containerName="sg-core" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.561921 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="873188a0-9dbb-4c95-b39e-cd503e07e59f" containerName="sg-core" Dec 15 05:52:54 crc kubenswrapper[4747]: E1215 05:52:54.561953 4747 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a57fd65d-825d-4fd1-a5e6-ab244093e1b8" containerName="dnsmasq-dns" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.561961 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57fd65d-825d-4fd1-a5e6-ab244093e1b8" containerName="dnsmasq-dns" Dec 15 05:52:54 crc kubenswrapper[4747]: E1215 05:52:54.561979 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="873188a0-9dbb-4c95-b39e-cd503e07e59f" containerName="ceilometer-central-agent" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.561988 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="873188a0-9dbb-4c95-b39e-cd503e07e59f" containerName="ceilometer-central-agent" Dec 15 05:52:54 crc kubenswrapper[4747]: E1215 05:52:54.561994 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656ef4f9-8e82-43ce-b0f9-b654bcecb12a" containerName="extract-content" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.562004 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="656ef4f9-8e82-43ce-b0f9-b654bcecb12a" containerName="extract-content" Dec 15 05:52:54 crc kubenswrapper[4747]: E1215 05:52:54.562016 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aa44578-a974-4c1f-90db-014ecf544678" containerName="registry-server" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.562028 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aa44578-a974-4c1f-90db-014ecf544678" containerName="registry-server" Dec 15 05:52:54 crc kubenswrapper[4747]: E1215 05:52:54.562038 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="873188a0-9dbb-4c95-b39e-cd503e07e59f" containerName="proxy-httpd" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.562050 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="873188a0-9dbb-4c95-b39e-cd503e07e59f" containerName="proxy-httpd" Dec 15 05:52:54 crc kubenswrapper[4747]: E1215 05:52:54.562066 4747 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a57fd65d-825d-4fd1-a5e6-ab244093e1b8" containerName="init" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.562073 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57fd65d-825d-4fd1-a5e6-ab244093e1b8" containerName="init" Dec 15 05:52:54 crc kubenswrapper[4747]: E1215 05:52:54.562084 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aa44578-a974-4c1f-90db-014ecf544678" containerName="extract-content" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.562091 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aa44578-a974-4c1f-90db-014ecf544678" containerName="extract-content" Dec 15 05:52:54 crc kubenswrapper[4747]: E1215 05:52:54.562102 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aa44578-a974-4c1f-90db-014ecf544678" containerName="extract-utilities" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.562110 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aa44578-a974-4c1f-90db-014ecf544678" containerName="extract-utilities" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.562424 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aa44578-a974-4c1f-90db-014ecf544678" containerName="registry-server" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.562445 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="873188a0-9dbb-4c95-b39e-cd503e07e59f" containerName="ceilometer-notification-agent" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.562481 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="873188a0-9dbb-4c95-b39e-cd503e07e59f" containerName="ceilometer-central-agent" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.562493 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="873188a0-9dbb-4c95-b39e-cd503e07e59f" containerName="sg-core" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.562501 4747 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e8515fd4-ce21-4f89-a703-e3807fa6fd90" containerName="registry-server" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.562514 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="873188a0-9dbb-4c95-b39e-cd503e07e59f" containerName="proxy-httpd" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.562529 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="a57fd65d-825d-4fd1-a5e6-ab244093e1b8" containerName="dnsmasq-dns" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.562553 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="656ef4f9-8e82-43ce-b0f9-b654bcecb12a" containerName="registry-server" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.567309 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.569564 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.570311 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.578960 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.609835 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.609920 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-scripts\") pod \"ceilometer-0\" (UID: 
\"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.610024 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.610078 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmv5q\" (UniqueName: \"kubernetes.io/projected/bdd70cf4-833d-4f42-b330-18deb7418bb2-kube-api-access-qmv5q\") pod \"ceilometer-0\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.610122 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdd70cf4-833d-4f42-b330-18deb7418bb2-run-httpd\") pod \"ceilometer-0\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.610196 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-config-data\") pod \"ceilometer-0\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.610216 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdd70cf4-833d-4f42-b330-18deb7418bb2-log-httpd\") pod \"ceilometer-0\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 
05:52:54.638168 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="873188a0-9dbb-4c95-b39e-cd503e07e59f" path="/var/lib/kubelet/pods/873188a0-9dbb-4c95-b39e-cd503e07e59f/volumes" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.638946 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a57fd65d-825d-4fd1-a5e6-ab244093e1b8" path="/var/lib/kubelet/pods/a57fd65d-825d-4fd1-a5e6-ab244093e1b8/volumes" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.711118 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmv5q\" (UniqueName: \"kubernetes.io/projected/bdd70cf4-833d-4f42-b330-18deb7418bb2-kube-api-access-qmv5q\") pod \"ceilometer-0\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.711260 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdd70cf4-833d-4f42-b330-18deb7418bb2-run-httpd\") pod \"ceilometer-0\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.711392 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-config-data\") pod \"ceilometer-0\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.711497 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdd70cf4-833d-4f42-b330-18deb7418bb2-log-httpd\") pod \"ceilometer-0\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.711650 4747 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.711730 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-scripts\") pod \"ceilometer-0\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.711864 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdd70cf4-833d-4f42-b330-18deb7418bb2-run-httpd\") pod \"ceilometer-0\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.711882 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.712844 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdd70cf4-833d-4f42-b330-18deb7418bb2-log-httpd\") pod \"ceilometer-0\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.716907 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " pod="openstack/ceilometer-0" Dec 15 05:52:54 crc 
kubenswrapper[4747]: I1215 05:52:54.717427 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-scripts\") pod \"ceilometer-0\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.717664 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-config-data\") pod \"ceilometer-0\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.718542 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.727002 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmv5q\" (UniqueName: \"kubernetes.io/projected/bdd70cf4-833d-4f42-b330-18deb7418bb2-kube-api-access-qmv5q\") pod \"ceilometer-0\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " pod="openstack/ceilometer-0" Dec 15 05:52:54 crc kubenswrapper[4747]: I1215 05:52:54.897856 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:52:55 crc kubenswrapper[4747]: I1215 05:52:55.314336 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:52:55 crc kubenswrapper[4747]: I1215 05:52:55.477304 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4hlv6" Dec 15 05:52:55 crc kubenswrapper[4747]: I1215 05:52:55.529176 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-db-sync-config-data\") pod \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " Dec 15 05:52:55 crc kubenswrapper[4747]: I1215 05:52:55.529439 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-scripts\") pod \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " Dec 15 05:52:55 crc kubenswrapper[4747]: I1215 05:52:55.529495 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-config-data\") pod \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " Dec 15 05:52:55 crc kubenswrapper[4747]: I1215 05:52:55.529561 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-combined-ca-bundle\") pod \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " Dec 15 05:52:55 crc kubenswrapper[4747]: I1215 05:52:55.529591 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ctpf\" (UniqueName: \"kubernetes.io/projected/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-kube-api-access-4ctpf\") pod \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " Dec 15 05:52:55 crc kubenswrapper[4747]: I1215 05:52:55.529676 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-etc-machine-id\") pod \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\" (UID: \"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6\") " Dec 15 05:52:55 crc kubenswrapper[4747]: I1215 05:52:55.530714 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6" (UID: "2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:52:55 crc kubenswrapper[4747]: I1215 05:52:55.538091 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-scripts" (OuterVolumeSpecName: "scripts") pod "2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6" (UID: "2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:55 crc kubenswrapper[4747]: I1215 05:52:55.541477 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-kube-api-access-4ctpf" (OuterVolumeSpecName: "kube-api-access-4ctpf") pod "2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6" (UID: "2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6"). InnerVolumeSpecName "kube-api-access-4ctpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:52:55 crc kubenswrapper[4747]: I1215 05:52:55.541562 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6" (UID: "2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 15 05:52:55 crc kubenswrapper[4747]: I1215 05:52:55.558453 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6" (UID: "2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 15 05:52:55 crc kubenswrapper[4747]: I1215 05:52:55.572336 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-config-data" (OuterVolumeSpecName: "config-data") pod "2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6" (UID: "2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 15 05:52:55 crc kubenswrapper[4747]: I1215 05:52:55.633053 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-scripts\") on node \"crc\" DevicePath \"\""
Dec 15 05:52:55 crc kubenswrapper[4747]: I1215 05:52:55.633089 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-config-data\") on node \"crc\" DevicePath \"\""
Dec 15 05:52:55 crc kubenswrapper[4747]: I1215 05:52:55.633102 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 15 05:52:55 crc kubenswrapper[4747]: I1215 05:52:55.633118 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ctpf\" (UniqueName: \"kubernetes.io/projected/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-kube-api-access-4ctpf\") on node \"crc\" DevicePath \"\""
Dec 15 05:52:55 crc kubenswrapper[4747]: I1215 05:52:55.633133 4747 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-etc-machine-id\") on node \"crc\" DevicePath \"\""
Dec 15 05:52:55 crc kubenswrapper[4747]: I1215 05:52:55.633144 4747 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.233243 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4hlv6" event={"ID":"2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6","Type":"ContainerDied","Data":"4e51ee7677e31866c0a28e0c3f7b91de0b057a1dea64c7b1e3a5f77ef13de453"}
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.233716 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e51ee7677e31866c0a28e0c3f7b91de0b057a1dea64c7b1e3a5f77ef13de453"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.233718 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4hlv6"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.236408 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bdd70cf4-833d-4f42-b330-18deb7418bb2","Type":"ContainerStarted","Data":"daa0ceba3762d60dee2c4ebe4af6ab63fae0d85ea55ae59ebaac0f4626051e66"}
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.236461 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bdd70cf4-833d-4f42-b330-18deb7418bb2","Type":"ContainerStarted","Data":"ba9bace7ba6c981ebe08a0251f019e3057451985e4b451a5afa78a14b3b1abdc"}
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.519682 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 15 05:52:56 crc kubenswrapper[4747]: E1215 05:52:56.520055 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6" containerName="cinder-db-sync"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.520070 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6" containerName="cinder-db-sync"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.520248 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6" containerName="cinder-db-sync"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.521132 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.530562 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.531231 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.531280 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-njzm6"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.531491 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.550099 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-695946c66c-cs66k"]
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.551872 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-695946c66c-cs66k"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.560338 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-695946c66c-cs66k"]
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.566471 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.658269 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-ovsdbserver-nb\") pod \"dnsmasq-dns-695946c66c-cs66k\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " pod="openstack/dnsmasq-dns-695946c66c-cs66k"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.658326 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-dns-swift-storage-0\") pod \"dnsmasq-dns-695946c66c-cs66k\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " pod="openstack/dnsmasq-dns-695946c66c-cs66k"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.658347 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " pod="openstack/cinder-scheduler-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.658366 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-config-data\") pod \"cinder-scheduler-0\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " pod="openstack/cinder-scheduler-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.658386 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-scripts\") pod \"cinder-scheduler-0\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " pod="openstack/cinder-scheduler-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.658848 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctbxc\" (UniqueName: \"kubernetes.io/projected/a54f13da-aab9-4190-8ddd-2537836ce0d9-kube-api-access-ctbxc\") pod \"cinder-scheduler-0\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " pod="openstack/cinder-scheduler-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.658890 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-config\") pod \"dnsmasq-dns-695946c66c-cs66k\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " pod="openstack/dnsmasq-dns-695946c66c-cs66k"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.658967 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn2tm\" (UniqueName: \"kubernetes.io/projected/a5a72256-21cc-42e2-bdfb-b1d372846404-kube-api-access-zn2tm\") pod \"dnsmasq-dns-695946c66c-cs66k\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " pod="openstack/dnsmasq-dns-695946c66c-cs66k"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.659003 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " pod="openstack/cinder-scheduler-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.659040 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-dns-svc\") pod \"dnsmasq-dns-695946c66c-cs66k\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " pod="openstack/dnsmasq-dns-695946c66c-cs66k"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.659081 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-ovsdbserver-sb\") pod \"dnsmasq-dns-695946c66c-cs66k\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " pod="openstack/dnsmasq-dns-695946c66c-cs66k"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.659109 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a54f13da-aab9-4190-8ddd-2537836ce0d9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " pod="openstack/cinder-scheduler-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.720473 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.722588 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.726223 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.735032 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.761454 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-dns-svc\") pod \"dnsmasq-dns-695946c66c-cs66k\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " pod="openstack/dnsmasq-dns-695946c66c-cs66k"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.761587 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-ovsdbserver-sb\") pod \"dnsmasq-dns-695946c66c-cs66k\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " pod="openstack/dnsmasq-dns-695946c66c-cs66k"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.761645 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a54f13da-aab9-4190-8ddd-2537836ce0d9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " pod="openstack/cinder-scheduler-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.761692 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-ovsdbserver-nb\") pod \"dnsmasq-dns-695946c66c-cs66k\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " pod="openstack/dnsmasq-dns-695946c66c-cs66k"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.761743 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-config-data\") pod \"cinder-api-0\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " pod="openstack/cinder-api-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.761766 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-logs\") pod \"cinder-api-0\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " pod="openstack/cinder-api-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.761791 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-dns-swift-storage-0\") pod \"dnsmasq-dns-695946c66c-cs66k\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " pod="openstack/dnsmasq-dns-695946c66c-cs66k"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.761802 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a54f13da-aab9-4190-8ddd-2537836ce0d9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " pod="openstack/cinder-scheduler-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.761810 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " pod="openstack/cinder-scheduler-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.761881 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-config-data\") pod \"cinder-scheduler-0\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " pod="openstack/cinder-scheduler-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.761907 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-scripts\") pod \"cinder-scheduler-0\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " pod="openstack/cinder-scheduler-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.762035 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctbxc\" (UniqueName: \"kubernetes.io/projected/a54f13da-aab9-4190-8ddd-2537836ce0d9-kube-api-access-ctbxc\") pod \"cinder-scheduler-0\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " pod="openstack/cinder-scheduler-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.762079 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-config\") pod \"dnsmasq-dns-695946c66c-cs66k\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " pod="openstack/dnsmasq-dns-695946c66c-cs66k"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.762103 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " pod="openstack/cinder-api-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.762122 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-scripts\") pod \"cinder-api-0\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " pod="openstack/cinder-api-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.762170 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " pod="openstack/cinder-api-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.762225 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn2tm\" (UniqueName: \"kubernetes.io/projected/a5a72256-21cc-42e2-bdfb-b1d372846404-kube-api-access-zn2tm\") pod \"dnsmasq-dns-695946c66c-cs66k\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " pod="openstack/dnsmasq-dns-695946c66c-cs66k"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.762301 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " pod="openstack/cinder-scheduler-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.762347 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-config-data-custom\") pod \"cinder-api-0\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " pod="openstack/cinder-api-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.762396 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq8jp\" (UniqueName: \"kubernetes.io/projected/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-kube-api-access-zq8jp\") pod \"cinder-api-0\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " pod="openstack/cinder-api-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.762490 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-ovsdbserver-nb\") pod \"dnsmasq-dns-695946c66c-cs66k\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " pod="openstack/dnsmasq-dns-695946c66c-cs66k"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.762546 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-ovsdbserver-sb\") pod \"dnsmasq-dns-695946c66c-cs66k\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " pod="openstack/dnsmasq-dns-695946c66c-cs66k"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.762491 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-dns-svc\") pod \"dnsmasq-dns-695946c66c-cs66k\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " pod="openstack/dnsmasq-dns-695946c66c-cs66k"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.763276 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-config\") pod \"dnsmasq-dns-695946c66c-cs66k\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " pod="openstack/dnsmasq-dns-695946c66c-cs66k"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.763659 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-dns-swift-storage-0\") pod \"dnsmasq-dns-695946c66c-cs66k\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " pod="openstack/dnsmasq-dns-695946c66c-cs66k"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.768092 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-scripts\") pod \"cinder-scheduler-0\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " pod="openstack/cinder-scheduler-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.776632 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-config-data\") pod \"cinder-scheduler-0\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " pod="openstack/cinder-scheduler-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.776778 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " pod="openstack/cinder-scheduler-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.778446 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " pod="openstack/cinder-scheduler-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.797800 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctbxc\" (UniqueName: \"kubernetes.io/projected/a54f13da-aab9-4190-8ddd-2537836ce0d9-kube-api-access-ctbxc\") pod \"cinder-scheduler-0\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " pod="openstack/cinder-scheduler-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.799120 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn2tm\" (UniqueName: \"kubernetes.io/projected/a5a72256-21cc-42e2-bdfb-b1d372846404-kube-api-access-zn2tm\") pod \"dnsmasq-dns-695946c66c-cs66k\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " pod="openstack/dnsmasq-dns-695946c66c-cs66k"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.847840 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.863760 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-config-data\") pod \"cinder-api-0\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " pod="openstack/cinder-api-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.863801 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-logs\") pod \"cinder-api-0\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " pod="openstack/cinder-api-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.863904 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " pod="openstack/cinder-api-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.863934 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-scripts\") pod \"cinder-api-0\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " pod="openstack/cinder-api-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.863985 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " pod="openstack/cinder-api-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.864085 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-config-data-custom\") pod \"cinder-api-0\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " pod="openstack/cinder-api-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.864123 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq8jp\" (UniqueName: \"kubernetes.io/projected/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-kube-api-access-zq8jp\") pod \"cinder-api-0\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " pod="openstack/cinder-api-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.864717 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " pod="openstack/cinder-api-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.865082 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-logs\") pod \"cinder-api-0\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " pod="openstack/cinder-api-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.872479 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-695946c66c-cs66k"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.872894 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-scripts\") pod \"cinder-api-0\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " pod="openstack/cinder-api-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.875879 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-config-data\") pod \"cinder-api-0\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " pod="openstack/cinder-api-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.877556 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-config-data-custom\") pod \"cinder-api-0\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " pod="openstack/cinder-api-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.890123 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " pod="openstack/cinder-api-0"
Dec 15 05:52:56 crc kubenswrapper[4747]: I1215 05:52:56.898239 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq8jp\" (UniqueName: \"kubernetes.io/projected/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-kube-api-access-zq8jp\") pod \"cinder-api-0\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " pod="openstack/cinder-api-0"
Dec 15 05:52:57 crc kubenswrapper[4747]: I1215 05:52:57.045991 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 15 05:52:57 crc kubenswrapper[4747]: I1215 05:52:57.253460 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bdd70cf4-833d-4f42-b330-18deb7418bb2","Type":"ContainerStarted","Data":"fffdbea136fb9d8541a4d2f0b764d9a1c5a31445a85e79ae1c0e202bc75bb81a"}
Dec 15 05:52:57 crc kubenswrapper[4747]: I1215 05:52:57.385654 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 15 05:52:57 crc kubenswrapper[4747]: I1215 05:52:57.512675 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-695946c66c-cs66k"]
Dec 15 05:52:57 crc kubenswrapper[4747]: I1215 05:52:57.619971 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.263960 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a84d6faa-79e4-428d-b1e8-ac1d30d274aa","Type":"ContainerStarted","Data":"f64e78515b9cccf9c86e3155fafdf43bdc0eb8e178d2a78e9a6f21ec70abc3a8"}
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.264480 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a84d6faa-79e4-428d-b1e8-ac1d30d274aa","Type":"ContainerStarted","Data":"fffe07b13b1d82aabad81b86c74c9d9c8102a0781db1ec4c287886f397a942de"}
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.265175 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a54f13da-aab9-4190-8ddd-2537836ce0d9","Type":"ContainerStarted","Data":"4e484b737cb3da489f4c7b8a5b8c52525f23c53970249191d9f5aab5ff98a9a5"}
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.267012 4747 generic.go:334] "Generic (PLEG): container finished" podID="b6cfb859-aec3-41c6-bb59-7e84b23396bd" containerID="41a17a55928b61d3480f0803e540c0ff36e7018c2c92f3f46c56a1c54f86dd4d" exitCode=0
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.267069 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77755588cd-rgjzw" event={"ID":"b6cfb859-aec3-41c6-bb59-7e84b23396bd","Type":"ContainerDied","Data":"41a17a55928b61d3480f0803e540c0ff36e7018c2c92f3f46c56a1c54f86dd4d"}
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.270320 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bdd70cf4-833d-4f42-b330-18deb7418bb2","Type":"ContainerStarted","Data":"6b2524aa5d7017805bf20260665d8476711b068229db516c11f12fa0be79bee9"}
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.272403 4747 generic.go:334] "Generic (PLEG): container finished" podID="a5a72256-21cc-42e2-bdfb-b1d372846404" containerID="a1652dd51f7a8891b34de564181e5e71553328b1c275de1fecb85666487ab0e4" exitCode=0
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.272437 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-695946c66c-cs66k" event={"ID":"a5a72256-21cc-42e2-bdfb-b1d372846404","Type":"ContainerDied","Data":"a1652dd51f7a8891b34de564181e5e71553328b1c275de1fecb85666487ab0e4"}
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.272455 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-695946c66c-cs66k" event={"ID":"a5a72256-21cc-42e2-bdfb-b1d372846404","Type":"ContainerStarted","Data":"b94198a122e1e6a5915f96cc2fc6bc507b23eaef1eddb20a810e6bd9d64a380b"}
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.507492 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77755588cd-rgjzw"
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.603217 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.607523 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-httpd-config\") pod \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\" (UID: \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\") "
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.607643 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kdxz\" (UniqueName: \"kubernetes.io/projected/b6cfb859-aec3-41c6-bb59-7e84b23396bd-kube-api-access-6kdxz\") pod \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\" (UID: \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\") "
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.607740 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-config\") pod \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\" (UID: \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\") "
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.607808 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-ovndb-tls-certs\") pod \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\" (UID: \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\") "
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.611498 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-combined-ca-bundle\") pod \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\" (UID: \"b6cfb859-aec3-41c6-bb59-7e84b23396bd\") "
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.664353 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cfb859-aec3-41c6-bb59-7e84b23396bd-kube-api-access-6kdxz" (OuterVolumeSpecName: "kube-api-access-6kdxz") pod "b6cfb859-aec3-41c6-bb59-7e84b23396bd" (UID: "b6cfb859-aec3-41c6-bb59-7e84b23396bd"). InnerVolumeSpecName "kube-api-access-6kdxz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.678099 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b6cfb859-aec3-41c6-bb59-7e84b23396bd" (UID: "b6cfb859-aec3-41c6-bb59-7e84b23396bd"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.702992 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6cfb859-aec3-41c6-bb59-7e84b23396bd" (UID: "b6cfb859-aec3-41c6-bb59-7e84b23396bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.714323 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-httpd-config\") on node \"crc\" DevicePath \"\""
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.714347 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kdxz\" (UniqueName: \"kubernetes.io/projected/b6cfb859-aec3-41c6-bb59-7e84b23396bd-kube-api-access-6kdxz\") on node \"crc\" DevicePath \"\""
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.714359 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.773084 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-config" (OuterVolumeSpecName: "config") pod "b6cfb859-aec3-41c6-bb59-7e84b23396bd" (UID: "b6cfb859-aec3-41c6-bb59-7e84b23396bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.809087 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b6cfb859-aec3-41c6-bb59-7e84b23396bd" (UID: "b6cfb859-aec3-41c6-bb59-7e84b23396bd"). InnerVolumeSpecName "ovndb-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.827631 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:58 crc kubenswrapper[4747]: I1215 05:52:58.827663 4747 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6cfb859-aec3-41c6-bb59-7e84b23396bd-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 15 05:52:59 crc kubenswrapper[4747]: I1215 05:52:59.296183 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-695946c66c-cs66k" event={"ID":"a5a72256-21cc-42e2-bdfb-b1d372846404","Type":"ContainerStarted","Data":"a9aeda417c28e50e39fd3ad75c3cf3ffd9ceaed4277e8b7caa6adbd17ade0f1e"} Dec 15 05:52:59 crc kubenswrapper[4747]: I1215 05:52:59.296564 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-695946c66c-cs66k" Dec 15 05:52:59 crc kubenswrapper[4747]: I1215 05:52:59.298753 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a54f13da-aab9-4190-8ddd-2537836ce0d9","Type":"ContainerStarted","Data":"b07fec422dd83412a0863895c073d1eef2f7ca8cecead159bef1784412b00d4d"} Dec 15 05:52:59 crc kubenswrapper[4747]: I1215 05:52:59.303978 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77755588cd-rgjzw" event={"ID":"b6cfb859-aec3-41c6-bb59-7e84b23396bd","Type":"ContainerDied","Data":"22437902654a92043637d8cbcabfea6f2dc3a45f5052e61cb0bdd0607178d952"} Dec 15 05:52:59 crc kubenswrapper[4747]: I1215 05:52:59.304023 4747 scope.go:117] "RemoveContainer" containerID="1301cbc440a4c64083f43926b1093ad190605f737fcb696187c0c22422a5854f" Dec 15 05:52:59 crc kubenswrapper[4747]: I1215 05:52:59.304032 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77755588cd-rgjzw" Dec 15 05:52:59 crc kubenswrapper[4747]: I1215 05:52:59.316943 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-695946c66c-cs66k" podStartSLOduration=3.316919581 podStartE2EDuration="3.316919581s" podCreationTimestamp="2025-12-15 05:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:52:59.315455859 +0000 UTC m=+943.011967776" watchObservedRunningTime="2025-12-15 05:52:59.316919581 +0000 UTC m=+943.013431498" Dec 15 05:52:59 crc kubenswrapper[4747]: I1215 05:52:59.348805 4747 scope.go:117] "RemoveContainer" containerID="41a17a55928b61d3480f0803e540c0ff36e7018c2c92f3f46c56a1c54f86dd4d" Dec 15 05:52:59 crc kubenswrapper[4747]: I1215 05:52:59.348916 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-77755588cd-rgjzw"] Dec 15 05:52:59 crc kubenswrapper[4747]: I1215 05:52:59.355947 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-77755588cd-rgjzw"] Dec 15 05:53:00 crc kubenswrapper[4747]: I1215 05:53:00.315920 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a84d6faa-79e4-428d-b1e8-ac1d30d274aa","Type":"ContainerStarted","Data":"bf0c6b9388aae7bb20189e6f6465825681798f794be5b5cd14131e37f7b74e58"} Dec 15 05:53:00 crc kubenswrapper[4747]: I1215 05:53:00.316569 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 15 05:53:00 crc kubenswrapper[4747]: I1215 05:53:00.316124 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a84d6faa-79e4-428d-b1e8-ac1d30d274aa" containerName="cinder-api-log" containerID="cri-o://f64e78515b9cccf9c86e3155fafdf43bdc0eb8e178d2a78e9a6f21ec70abc3a8" gracePeriod=30 Dec 15 05:53:00 crc kubenswrapper[4747]: I1215 
05:53:00.316176 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a84d6faa-79e4-428d-b1e8-ac1d30d274aa" containerName="cinder-api" containerID="cri-o://bf0c6b9388aae7bb20189e6f6465825681798f794be5b5cd14131e37f7b74e58" gracePeriod=30 Dec 15 05:53:00 crc kubenswrapper[4747]: I1215 05:53:00.319465 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a54f13da-aab9-4190-8ddd-2537836ce0d9","Type":"ContainerStarted","Data":"2c5c210ad8b15bc7d9db0919194d25b20df26bad178aeaf1f86e34170c57693a"} Dec 15 05:53:00 crc kubenswrapper[4747]: I1215 05:53:00.332802 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bdd70cf4-833d-4f42-b330-18deb7418bb2","Type":"ContainerStarted","Data":"15be3e8623c592482be8fabb0d6cd06e494575327ab95ae3b63e7f5f28913c69"} Dec 15 05:53:00 crc kubenswrapper[4747]: I1215 05:53:00.332983 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 15 05:53:00 crc kubenswrapper[4747]: I1215 05:53:00.339659 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.339638413 podStartE2EDuration="4.339638413s" podCreationTimestamp="2025-12-15 05:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:53:00.334455866 +0000 UTC m=+944.030967783" watchObservedRunningTime="2025-12-15 05:53:00.339638413 +0000 UTC m=+944.036150330" Dec 15 05:53:00 crc kubenswrapper[4747]: I1215 05:53:00.376302 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.566430713 podStartE2EDuration="6.376282086s" podCreationTimestamp="2025-12-15 05:52:54 +0000 UTC" firstStartedPulling="2025-12-15 05:52:55.335587037 +0000 UTC m=+939.032098954" 
lastFinishedPulling="2025-12-15 05:52:59.14543841 +0000 UTC m=+942.841950327" observedRunningTime="2025-12-15 05:53:00.371873164 +0000 UTC m=+944.068385081" watchObservedRunningTime="2025-12-15 05:53:00.376282086 +0000 UTC m=+944.072794003" Dec 15 05:53:00 crc kubenswrapper[4747]: I1215 05:53:00.401955 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.487641677 podStartE2EDuration="4.401941521s" podCreationTimestamp="2025-12-15 05:52:56 +0000 UTC" firstStartedPulling="2025-12-15 05:52:57.399262748 +0000 UTC m=+941.095774665" lastFinishedPulling="2025-12-15 05:52:58.313562603 +0000 UTC m=+942.010074509" observedRunningTime="2025-12-15 05:53:00.388624167 +0000 UTC m=+944.085136074" watchObservedRunningTime="2025-12-15 05:53:00.401941521 +0000 UTC m=+944.098453438" Dec 15 05:53:00 crc kubenswrapper[4747]: I1215 05:53:00.561336 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-79fcd98c9d-ccgjm" Dec 15 05:53:00 crc kubenswrapper[4747]: I1215 05:53:00.609517 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6f4bb4686b-tlv2x"] Dec 15 05:53:00 crc kubenswrapper[4747]: I1215 05:53:00.609773 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6f4bb4686b-tlv2x" podUID="ed94a5c7-82af-48dc-8592-440d39a321f7" containerName="barbican-api-log" containerID="cri-o://e61dac18edbba4af42f513e64558a8fd72646531c05e6d6bd83f7eb16203ba7a" gracePeriod=30 Dec 15 05:53:00 crc kubenswrapper[4747]: I1215 05:53:00.610129 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6f4bb4686b-tlv2x" podUID="ed94a5c7-82af-48dc-8592-440d39a321f7" containerName="barbican-api" containerID="cri-o://e96e35defa31d4421e562086962fa2ef685429ede9b83540bcc32b54739cf01e" gracePeriod=30 Dec 15 05:53:00 crc kubenswrapper[4747]: I1215 05:53:00.653266 4747 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cfb859-aec3-41c6-bb59-7e84b23396bd" path="/var/lib/kubelet/pods/b6cfb859-aec3-41c6-bb59-7e84b23396bd/volumes" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.033528 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.191629 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-etc-machine-id\") pod \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.191704 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq8jp\" (UniqueName: \"kubernetes.io/projected/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-kube-api-access-zq8jp\") pod \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.191726 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a84d6faa-79e4-428d-b1e8-ac1d30d274aa" (UID: "a84d6faa-79e4-428d-b1e8-ac1d30d274aa"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.191785 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-config-data-custom\") pod \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.191835 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-logs\") pod \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.191874 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-scripts\") pod \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.191945 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-config-data\") pod \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.191982 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-combined-ca-bundle\") pod \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\" (UID: \"a84d6faa-79e4-428d-b1e8-ac1d30d274aa\") " Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.193335 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-logs" 
(OuterVolumeSpecName: "logs") pod "a84d6faa-79e4-428d-b1e8-ac1d30d274aa" (UID: "a84d6faa-79e4-428d-b1e8-ac1d30d274aa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.194207 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-logs\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.194226 4747 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.199982 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-scripts" (OuterVolumeSpecName: "scripts") pod "a84d6faa-79e4-428d-b1e8-ac1d30d274aa" (UID: "a84d6faa-79e4-428d-b1e8-ac1d30d274aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.200498 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-kube-api-access-zq8jp" (OuterVolumeSpecName: "kube-api-access-zq8jp") pod "a84d6faa-79e4-428d-b1e8-ac1d30d274aa" (UID: "a84d6faa-79e4-428d-b1e8-ac1d30d274aa"). InnerVolumeSpecName "kube-api-access-zq8jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.206017 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a84d6faa-79e4-428d-b1e8-ac1d30d274aa" (UID: "a84d6faa-79e4-428d-b1e8-ac1d30d274aa"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.220284 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a84d6faa-79e4-428d-b1e8-ac1d30d274aa" (UID: "a84d6faa-79e4-428d-b1e8-ac1d30d274aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.242231 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-config-data" (OuterVolumeSpecName: "config-data") pod "a84d6faa-79e4-428d-b1e8-ac1d30d274aa" (UID: "a84d6faa-79e4-428d-b1e8-ac1d30d274aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.296899 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.296948 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq8jp\" (UniqueName: \"kubernetes.io/projected/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-kube-api-access-zq8jp\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.296970 4747 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.296982 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-scripts\") on node \"crc\" 
DevicePath \"\"" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.296993 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a84d6faa-79e4-428d-b1e8-ac1d30d274aa-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.344566 4747 generic.go:334] "Generic (PLEG): container finished" podID="ed94a5c7-82af-48dc-8592-440d39a321f7" containerID="e61dac18edbba4af42f513e64558a8fd72646531c05e6d6bd83f7eb16203ba7a" exitCode=143 Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.344629 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f4bb4686b-tlv2x" event={"ID":"ed94a5c7-82af-48dc-8592-440d39a321f7","Type":"ContainerDied","Data":"e61dac18edbba4af42f513e64558a8fd72646531c05e6d6bd83f7eb16203ba7a"} Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.345970 4747 generic.go:334] "Generic (PLEG): container finished" podID="a84d6faa-79e4-428d-b1e8-ac1d30d274aa" containerID="bf0c6b9388aae7bb20189e6f6465825681798f794be5b5cd14131e37f7b74e58" exitCode=0 Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.345996 4747 generic.go:334] "Generic (PLEG): container finished" podID="a84d6faa-79e4-428d-b1e8-ac1d30d274aa" containerID="f64e78515b9cccf9c86e3155fafdf43bdc0eb8e178d2a78e9a6f21ec70abc3a8" exitCode=143 Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.347116 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.354756 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a84d6faa-79e4-428d-b1e8-ac1d30d274aa","Type":"ContainerDied","Data":"bf0c6b9388aae7bb20189e6f6465825681798f794be5b5cd14131e37f7b74e58"} Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.355703 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a84d6faa-79e4-428d-b1e8-ac1d30d274aa","Type":"ContainerDied","Data":"f64e78515b9cccf9c86e3155fafdf43bdc0eb8e178d2a78e9a6f21ec70abc3a8"} Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.355733 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a84d6faa-79e4-428d-b1e8-ac1d30d274aa","Type":"ContainerDied","Data":"fffe07b13b1d82aabad81b86c74c9d9c8102a0781db1ec4c287886f397a942de"} Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.356276 4747 scope.go:117] "RemoveContainer" containerID="bf0c6b9388aae7bb20189e6f6465825681798f794be5b5cd14131e37f7b74e58" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.381291 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.387659 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.392638 4747 scope.go:117] "RemoveContainer" containerID="f64e78515b9cccf9c86e3155fafdf43bdc0eb8e178d2a78e9a6f21ec70abc3a8" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.402824 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 15 05:53:01 crc kubenswrapper[4747]: E1215 05:53:01.403278 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6cfb859-aec3-41c6-bb59-7e84b23396bd" containerName="neutron-api" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 
05:53:01.403300 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6cfb859-aec3-41c6-bb59-7e84b23396bd" containerName="neutron-api" Dec 15 05:53:01 crc kubenswrapper[4747]: E1215 05:53:01.403334 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6cfb859-aec3-41c6-bb59-7e84b23396bd" containerName="neutron-httpd" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.403340 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6cfb859-aec3-41c6-bb59-7e84b23396bd" containerName="neutron-httpd" Dec 15 05:53:01 crc kubenswrapper[4747]: E1215 05:53:01.403358 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a84d6faa-79e4-428d-b1e8-ac1d30d274aa" containerName="cinder-api" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.403365 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="a84d6faa-79e4-428d-b1e8-ac1d30d274aa" containerName="cinder-api" Dec 15 05:53:01 crc kubenswrapper[4747]: E1215 05:53:01.403374 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a84d6faa-79e4-428d-b1e8-ac1d30d274aa" containerName="cinder-api-log" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.403380 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="a84d6faa-79e4-428d-b1e8-ac1d30d274aa" containerName="cinder-api-log" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.403564 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="a84d6faa-79e4-428d-b1e8-ac1d30d274aa" containerName="cinder-api" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.403596 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6cfb859-aec3-41c6-bb59-7e84b23396bd" containerName="neutron-api" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.403606 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="a84d6faa-79e4-428d-b1e8-ac1d30d274aa" containerName="cinder-api-log" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.403617 4747 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b6cfb859-aec3-41c6-bb59-7e84b23396bd" containerName="neutron-httpd" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.404620 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.407203 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.407260 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.408124 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.414052 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.420006 4747 scope.go:117] "RemoveContainer" containerID="bf0c6b9388aae7bb20189e6f6465825681798f794be5b5cd14131e37f7b74e58" Dec 15 05:53:01 crc kubenswrapper[4747]: E1215 05:53:01.422040 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf0c6b9388aae7bb20189e6f6465825681798f794be5b5cd14131e37f7b74e58\": container with ID starting with bf0c6b9388aae7bb20189e6f6465825681798f794be5b5cd14131e37f7b74e58 not found: ID does not exist" containerID="bf0c6b9388aae7bb20189e6f6465825681798f794be5b5cd14131e37f7b74e58" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.422201 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf0c6b9388aae7bb20189e6f6465825681798f794be5b5cd14131e37f7b74e58"} err="failed to get container status \"bf0c6b9388aae7bb20189e6f6465825681798f794be5b5cd14131e37f7b74e58\": rpc error: code = NotFound desc = could not find container 
\"bf0c6b9388aae7bb20189e6f6465825681798f794be5b5cd14131e37f7b74e58\": container with ID starting with bf0c6b9388aae7bb20189e6f6465825681798f794be5b5cd14131e37f7b74e58 not found: ID does not exist" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.422326 4747 scope.go:117] "RemoveContainer" containerID="f64e78515b9cccf9c86e3155fafdf43bdc0eb8e178d2a78e9a6f21ec70abc3a8" Dec 15 05:53:01 crc kubenswrapper[4747]: E1215 05:53:01.422952 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f64e78515b9cccf9c86e3155fafdf43bdc0eb8e178d2a78e9a6f21ec70abc3a8\": container with ID starting with f64e78515b9cccf9c86e3155fafdf43bdc0eb8e178d2a78e9a6f21ec70abc3a8 not found: ID does not exist" containerID="f64e78515b9cccf9c86e3155fafdf43bdc0eb8e178d2a78e9a6f21ec70abc3a8" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.422985 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f64e78515b9cccf9c86e3155fafdf43bdc0eb8e178d2a78e9a6f21ec70abc3a8"} err="failed to get container status \"f64e78515b9cccf9c86e3155fafdf43bdc0eb8e178d2a78e9a6f21ec70abc3a8\": rpc error: code = NotFound desc = could not find container \"f64e78515b9cccf9c86e3155fafdf43bdc0eb8e178d2a78e9a6f21ec70abc3a8\": container with ID starting with f64e78515b9cccf9c86e3155fafdf43bdc0eb8e178d2a78e9a6f21ec70abc3a8 not found: ID does not exist" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.423010 4747 scope.go:117] "RemoveContainer" containerID="bf0c6b9388aae7bb20189e6f6465825681798f794be5b5cd14131e37f7b74e58" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.423784 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf0c6b9388aae7bb20189e6f6465825681798f794be5b5cd14131e37f7b74e58"} err="failed to get container status \"bf0c6b9388aae7bb20189e6f6465825681798f794be5b5cd14131e37f7b74e58\": rpc error: code = NotFound desc = could not find 
container \"bf0c6b9388aae7bb20189e6f6465825681798f794be5b5cd14131e37f7b74e58\": container with ID starting with bf0c6b9388aae7bb20189e6f6465825681798f794be5b5cd14131e37f7b74e58 not found: ID does not exist" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.423965 4747 scope.go:117] "RemoveContainer" containerID="f64e78515b9cccf9c86e3155fafdf43bdc0eb8e178d2a78e9a6f21ec70abc3a8" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.424404 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f64e78515b9cccf9c86e3155fafdf43bdc0eb8e178d2a78e9a6f21ec70abc3a8"} err="failed to get container status \"f64e78515b9cccf9c86e3155fafdf43bdc0eb8e178d2a78e9a6f21ec70abc3a8\": rpc error: code = NotFound desc = could not find container \"f64e78515b9cccf9c86e3155fafdf43bdc0eb8e178d2a78e9a6f21ec70abc3a8\": container with ID starting with f64e78515b9cccf9c86e3155fafdf43bdc0eb8e178d2a78e9a6f21ec70abc3a8 not found: ID does not exist" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.500448 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.500488 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.500521 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-logs\") 
pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.500607 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.500696 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-config-data\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.500721 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-scripts\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.500741 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.500778 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-config-data-custom\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc 
kubenswrapper[4747]: I1215 05:53:01.500811 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flccz\" (UniqueName: \"kubernetes.io/projected/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-kube-api-access-flccz\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.604954 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.605094 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-config-data\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.605120 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-scripts\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.605139 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.605198 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-config-data-custom\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.605248 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flccz\" (UniqueName: \"kubernetes.io/projected/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-kube-api-access-flccz\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.605422 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.605454 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.605485 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-logs\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.606014 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-logs\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 
05:53:01.606342 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.609846 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-scripts\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.610644 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-config-data\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.612547 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.612666 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.613463 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.615915 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.632574 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flccz\" (UniqueName: \"kubernetes.io/projected/f66857a8-55e6-4e4f-ba1c-23bc5afec36b-kube-api-access-flccz\") pod \"cinder-api-0\" (UID: \"f66857a8-55e6-4e4f-ba1c-23bc5afec36b\") " pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.731032 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 15 05:53:01 crc kubenswrapper[4747]: I1215 05:53:01.848830 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 15 05:53:02 crc kubenswrapper[4747]: I1215 05:53:02.173350 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 15 05:53:02 crc kubenswrapper[4747]: I1215 05:53:02.363104 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f66857a8-55e6-4e4f-ba1c-23bc5afec36b","Type":"ContainerStarted","Data":"92520b5f350c3f920471b045d0e965f62e402c21c09e13db17fbc7677a2ac5da"} Dec 15 05:53:02 crc kubenswrapper[4747]: I1215 05:53:02.639895 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a84d6faa-79e4-428d-b1e8-ac1d30d274aa" path="/var/lib/kubelet/pods/a84d6faa-79e4-428d-b1e8-ac1d30d274aa/volumes" Dec 15 05:53:03 crc kubenswrapper[4747]: I1215 05:53:03.391212 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"f66857a8-55e6-4e4f-ba1c-23bc5afec36b","Type":"ContainerStarted","Data":"856fac2e2203e89f63a07df46387f7530020e714508a0d4c34c264222b4b600a"} Dec 15 05:53:03 crc kubenswrapper[4747]: I1215 05:53:03.391988 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f66857a8-55e6-4e4f-ba1c-23bc5afec36b","Type":"ContainerStarted","Data":"629fdd0fe928b0479799bc998d0dd6ed393fa8dee6c6bdfa402db63f1c1b3dff"} Dec 15 05:53:03 crc kubenswrapper[4747]: I1215 05:53:03.392037 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 15 05:53:03 crc kubenswrapper[4747]: I1215 05:53:03.421752 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.421731996 podStartE2EDuration="2.421731996s" podCreationTimestamp="2025-12-15 05:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:53:03.408040738 +0000 UTC m=+947.104552655" watchObservedRunningTime="2025-12-15 05:53:03.421731996 +0000 UTC m=+947.118243913" Dec 15 05:53:03 crc kubenswrapper[4747]: I1215 05:53:03.772701 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6f4bb4686b-tlv2x" podUID="ed94a5c7-82af-48dc-8592-440d39a321f7" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": read tcp 10.217.0.2:33962->10.217.0.154:9311: read: connection reset by peer" Dec 15 05:53:03 crc kubenswrapper[4747]: I1215 05:53:03.772757 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6f4bb4686b-tlv2x" podUID="ed94a5c7-82af-48dc-8592-440d39a321f7" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": read tcp 10.217.0.2:33968->10.217.0.154:9311: read: connection reset by peer" Dec 15 05:53:04 crc kubenswrapper[4747]: 
I1215 05:53:04.154751 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.261648 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed94a5c7-82af-48dc-8592-440d39a321f7-logs\") pod \"ed94a5c7-82af-48dc-8592-440d39a321f7\" (UID: \"ed94a5c7-82af-48dc-8592-440d39a321f7\") " Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.262014 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed94a5c7-82af-48dc-8592-440d39a321f7-combined-ca-bundle\") pod \"ed94a5c7-82af-48dc-8592-440d39a321f7\" (UID: \"ed94a5c7-82af-48dc-8592-440d39a321f7\") " Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.262097 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jv2t\" (UniqueName: \"kubernetes.io/projected/ed94a5c7-82af-48dc-8592-440d39a321f7-kube-api-access-5jv2t\") pod \"ed94a5c7-82af-48dc-8592-440d39a321f7\" (UID: \"ed94a5c7-82af-48dc-8592-440d39a321f7\") " Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.262138 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed94a5c7-82af-48dc-8592-440d39a321f7-config-data-custom\") pod \"ed94a5c7-82af-48dc-8592-440d39a321f7\" (UID: \"ed94a5c7-82af-48dc-8592-440d39a321f7\") " Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.262266 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed94a5c7-82af-48dc-8592-440d39a321f7-config-data\") pod \"ed94a5c7-82af-48dc-8592-440d39a321f7\" (UID: \"ed94a5c7-82af-48dc-8592-440d39a321f7\") " Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.262741 4747 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed94a5c7-82af-48dc-8592-440d39a321f7-logs" (OuterVolumeSpecName: "logs") pod "ed94a5c7-82af-48dc-8592-440d39a321f7" (UID: "ed94a5c7-82af-48dc-8592-440d39a321f7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.263312 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed94a5c7-82af-48dc-8592-440d39a321f7-logs\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.270172 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed94a5c7-82af-48dc-8592-440d39a321f7-kube-api-access-5jv2t" (OuterVolumeSpecName: "kube-api-access-5jv2t") pod "ed94a5c7-82af-48dc-8592-440d39a321f7" (UID: "ed94a5c7-82af-48dc-8592-440d39a321f7"). InnerVolumeSpecName "kube-api-access-5jv2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.270407 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed94a5c7-82af-48dc-8592-440d39a321f7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ed94a5c7-82af-48dc-8592-440d39a321f7" (UID: "ed94a5c7-82af-48dc-8592-440d39a321f7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.289130 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed94a5c7-82af-48dc-8592-440d39a321f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed94a5c7-82af-48dc-8592-440d39a321f7" (UID: "ed94a5c7-82af-48dc-8592-440d39a321f7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.306395 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed94a5c7-82af-48dc-8592-440d39a321f7-config-data" (OuterVolumeSpecName: "config-data") pod "ed94a5c7-82af-48dc-8592-440d39a321f7" (UID: "ed94a5c7-82af-48dc-8592-440d39a321f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.367239 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed94a5c7-82af-48dc-8592-440d39a321f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.367301 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jv2t\" (UniqueName: \"kubernetes.io/projected/ed94a5c7-82af-48dc-8592-440d39a321f7-kube-api-access-5jv2t\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.367330 4747 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed94a5c7-82af-48dc-8592-440d39a321f7-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.367340 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed94a5c7-82af-48dc-8592-440d39a321f7-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.402881 4747 generic.go:334] "Generic (PLEG): container finished" podID="ed94a5c7-82af-48dc-8592-440d39a321f7" containerID="e96e35defa31d4421e562086962fa2ef685429ede9b83540bcc32b54739cf01e" exitCode=0 Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.402982 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f4bb4686b-tlv2x" 
event={"ID":"ed94a5c7-82af-48dc-8592-440d39a321f7","Type":"ContainerDied","Data":"e96e35defa31d4421e562086962fa2ef685429ede9b83540bcc32b54739cf01e"} Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.403087 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f4bb4686b-tlv2x" event={"ID":"ed94a5c7-82af-48dc-8592-440d39a321f7","Type":"ContainerDied","Data":"abe50a760da8f4fd81e6be4e90f89083232e0c9f217fb36d18be3293b4168797"} Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.403009 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6f4bb4686b-tlv2x" Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.403133 4747 scope.go:117] "RemoveContainer" containerID="e96e35defa31d4421e562086962fa2ef685429ede9b83540bcc32b54739cf01e" Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.427231 4747 scope.go:117] "RemoveContainer" containerID="e61dac18edbba4af42f513e64558a8fd72646531c05e6d6bd83f7eb16203ba7a" Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.438224 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6f4bb4686b-tlv2x"] Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.451273 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6f4bb4686b-tlv2x"] Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.453689 4747 scope.go:117] "RemoveContainer" containerID="e96e35defa31d4421e562086962fa2ef685429ede9b83540bcc32b54739cf01e" Dec 15 05:53:04 crc kubenswrapper[4747]: E1215 05:53:04.454147 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e96e35defa31d4421e562086962fa2ef685429ede9b83540bcc32b54739cf01e\": container with ID starting with e96e35defa31d4421e562086962fa2ef685429ede9b83540bcc32b54739cf01e not found: ID does not exist" containerID="e96e35defa31d4421e562086962fa2ef685429ede9b83540bcc32b54739cf01e" Dec 15 
05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.454255 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e96e35defa31d4421e562086962fa2ef685429ede9b83540bcc32b54739cf01e"} err="failed to get container status \"e96e35defa31d4421e562086962fa2ef685429ede9b83540bcc32b54739cf01e\": rpc error: code = NotFound desc = could not find container \"e96e35defa31d4421e562086962fa2ef685429ede9b83540bcc32b54739cf01e\": container with ID starting with e96e35defa31d4421e562086962fa2ef685429ede9b83540bcc32b54739cf01e not found: ID does not exist" Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.454356 4747 scope.go:117] "RemoveContainer" containerID="e61dac18edbba4af42f513e64558a8fd72646531c05e6d6bd83f7eb16203ba7a" Dec 15 05:53:04 crc kubenswrapper[4747]: E1215 05:53:04.455028 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e61dac18edbba4af42f513e64558a8fd72646531c05e6d6bd83f7eb16203ba7a\": container with ID starting with e61dac18edbba4af42f513e64558a8fd72646531c05e6d6bd83f7eb16203ba7a not found: ID does not exist" containerID="e61dac18edbba4af42f513e64558a8fd72646531c05e6d6bd83f7eb16203ba7a" Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.455075 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e61dac18edbba4af42f513e64558a8fd72646531c05e6d6bd83f7eb16203ba7a"} err="failed to get container status \"e61dac18edbba4af42f513e64558a8fd72646531c05e6d6bd83f7eb16203ba7a\": rpc error: code = NotFound desc = could not find container \"e61dac18edbba4af42f513e64558a8fd72646531c05e6d6bd83f7eb16203ba7a\": container with ID starting with e61dac18edbba4af42f513e64558a8fd72646531c05e6d6bd83f7eb16203ba7a not found: ID does not exist" Dec 15 05:53:04 crc kubenswrapper[4747]: I1215 05:53:04.649494 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed94a5c7-82af-48dc-8592-440d39a321f7" 
path="/var/lib/kubelet/pods/ed94a5c7-82af-48dc-8592-440d39a321f7/volumes" Dec 15 05:53:06 crc kubenswrapper[4747]: I1215 05:53:06.837649 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:53:06 crc kubenswrapper[4747]: I1215 05:53:06.841679 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-d77548fc6-2zqkd" Dec 15 05:53:06 crc kubenswrapper[4747]: I1215 05:53:06.875129 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-695946c66c-cs66k" Dec 15 05:53:06 crc kubenswrapper[4747]: I1215 05:53:06.981217 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-796db5f74c-84jzt"] Dec 15 05:53:06 crc kubenswrapper[4747]: I1215 05:53:06.981491 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-796db5f74c-84jzt" podUID="9e10c5cb-1a07-4941-885e-45dc50d15021" containerName="dnsmasq-dns" containerID="cri-o://cf00726260f95e81b5468fd976269014fb809fb1ff6c1dc766ef2e281b331c15" gracePeriod=10 Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.131176 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.168496 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.435406 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.437134 4747 generic.go:334] "Generic (PLEG): container finished" podID="9e10c5cb-1a07-4941-885e-45dc50d15021" containerID="cf00726260f95e81b5468fd976269014fb809fb1ff6c1dc766ef2e281b331c15" exitCode=0 Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.437204 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-796db5f74c-84jzt" event={"ID":"9e10c5cb-1a07-4941-885e-45dc50d15021","Type":"ContainerDied","Data":"cf00726260f95e81b5468fd976269014fb809fb1ff6c1dc766ef2e281b331c15"} Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.437342 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-796db5f74c-84jzt" event={"ID":"9e10c5cb-1a07-4941-885e-45dc50d15021","Type":"ContainerDied","Data":"dc86bbb85d1a5e9664dbe165e6819c53e31c9f194068d15739383ccd03e92fd2"} Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.437369 4747 scope.go:117] "RemoveContainer" containerID="cf00726260f95e81b5468fd976269014fb809fb1ff6c1dc766ef2e281b331c15" Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.437725 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a54f13da-aab9-4190-8ddd-2537836ce0d9" containerName="cinder-scheduler" containerID="cri-o://b07fec422dd83412a0863895c073d1eef2f7ca8cecead159bef1784412b00d4d" gracePeriod=30 Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.437806 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a54f13da-aab9-4190-8ddd-2537836ce0d9" containerName="probe" containerID="cri-o://2c5c210ad8b15bc7d9db0919194d25b20df26bad178aeaf1f86e34170c57693a" gracePeriod=30 Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.462765 4747 scope.go:117] "RemoveContainer" 
containerID="0e5e3841371bb40dd5f2f2c13f223c9fd3f34055f264aaa575a93b32bb4001a5" Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.482851 4747 scope.go:117] "RemoveContainer" containerID="cf00726260f95e81b5468fd976269014fb809fb1ff6c1dc766ef2e281b331c15" Dec 15 05:53:07 crc kubenswrapper[4747]: E1215 05:53:07.483303 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf00726260f95e81b5468fd976269014fb809fb1ff6c1dc766ef2e281b331c15\": container with ID starting with cf00726260f95e81b5468fd976269014fb809fb1ff6c1dc766ef2e281b331c15 not found: ID does not exist" containerID="cf00726260f95e81b5468fd976269014fb809fb1ff6c1dc766ef2e281b331c15" Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.483334 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf00726260f95e81b5468fd976269014fb809fb1ff6c1dc766ef2e281b331c15"} err="failed to get container status \"cf00726260f95e81b5468fd976269014fb809fb1ff6c1dc766ef2e281b331c15\": rpc error: code = NotFound desc = could not find container \"cf00726260f95e81b5468fd976269014fb809fb1ff6c1dc766ef2e281b331c15\": container with ID starting with cf00726260f95e81b5468fd976269014fb809fb1ff6c1dc766ef2e281b331c15 not found: ID does not exist" Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.483366 4747 scope.go:117] "RemoveContainer" containerID="0e5e3841371bb40dd5f2f2c13f223c9fd3f34055f264aaa575a93b32bb4001a5" Dec 15 05:53:07 crc kubenswrapper[4747]: E1215 05:53:07.483815 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e5e3841371bb40dd5f2f2c13f223c9fd3f34055f264aaa575a93b32bb4001a5\": container with ID starting with 0e5e3841371bb40dd5f2f2c13f223c9fd3f34055f264aaa575a93b32bb4001a5 not found: ID does not exist" containerID="0e5e3841371bb40dd5f2f2c13f223c9fd3f34055f264aaa575a93b32bb4001a5" Dec 15 05:53:07 crc 
kubenswrapper[4747]: I1215 05:53:07.483866 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e5e3841371bb40dd5f2f2c13f223c9fd3f34055f264aaa575a93b32bb4001a5"} err="failed to get container status \"0e5e3841371bb40dd5f2f2c13f223c9fd3f34055f264aaa575a93b32bb4001a5\": rpc error: code = NotFound desc = could not find container \"0e5e3841371bb40dd5f2f2c13f223c9fd3f34055f264aaa575a93b32bb4001a5\": container with ID starting with 0e5e3841371bb40dd5f2f2c13f223c9fd3f34055f264aaa575a93b32bb4001a5 not found: ID does not exist" Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.535980 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-config\") pod \"9e10c5cb-1a07-4941-885e-45dc50d15021\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.536077 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-dns-svc\") pod \"9e10c5cb-1a07-4941-885e-45dc50d15021\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.536110 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt8tn\" (UniqueName: \"kubernetes.io/projected/9e10c5cb-1a07-4941-885e-45dc50d15021-kube-api-access-xt8tn\") pod \"9e10c5cb-1a07-4941-885e-45dc50d15021\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.536129 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-ovsdbserver-nb\") pod \"9e10c5cb-1a07-4941-885e-45dc50d15021\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " Dec 15 05:53:07 crc 
kubenswrapper[4747]: I1215 05:53:07.536323 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-dns-swift-storage-0\") pod \"9e10c5cb-1a07-4941-885e-45dc50d15021\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.536363 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-ovsdbserver-sb\") pod \"9e10c5cb-1a07-4941-885e-45dc50d15021\" (UID: \"9e10c5cb-1a07-4941-885e-45dc50d15021\") " Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.541659 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e10c5cb-1a07-4941-885e-45dc50d15021-kube-api-access-xt8tn" (OuterVolumeSpecName: "kube-api-access-xt8tn") pod "9e10c5cb-1a07-4941-885e-45dc50d15021" (UID: "9e10c5cb-1a07-4941-885e-45dc50d15021"). InnerVolumeSpecName "kube-api-access-xt8tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.573023 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-config" (OuterVolumeSpecName: "config") pod "9e10c5cb-1a07-4941-885e-45dc50d15021" (UID: "9e10c5cb-1a07-4941-885e-45dc50d15021"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.573589 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9e10c5cb-1a07-4941-885e-45dc50d15021" (UID: "9e10c5cb-1a07-4941-885e-45dc50d15021"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.576872 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9e10c5cb-1a07-4941-885e-45dc50d15021" (UID: "9e10c5cb-1a07-4941-885e-45dc50d15021"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.578006 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9e10c5cb-1a07-4941-885e-45dc50d15021" (UID: "9e10c5cb-1a07-4941-885e-45dc50d15021"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.585124 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9e10c5cb-1a07-4941-885e-45dc50d15021" (UID: "9e10c5cb-1a07-4941-885e-45dc50d15021"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.639891 4747 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.639936 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt8tn\" (UniqueName: \"kubernetes.io/projected/9e10c5cb-1a07-4941-885e-45dc50d15021-kube-api-access-xt8tn\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.639950 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.639961 4747 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.639973 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:07 crc kubenswrapper[4747]: I1215 05:53:07.639982 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e10c5cb-1a07-4941-885e-45dc50d15021-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:08 crc kubenswrapper[4747]: I1215 05:53:08.452257 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-796db5f74c-84jzt" Dec 15 05:53:08 crc kubenswrapper[4747]: I1215 05:53:08.454728 4747 generic.go:334] "Generic (PLEG): container finished" podID="a54f13da-aab9-4190-8ddd-2537836ce0d9" containerID="2c5c210ad8b15bc7d9db0919194d25b20df26bad178aeaf1f86e34170c57693a" exitCode=0 Dec 15 05:53:08 crc kubenswrapper[4747]: I1215 05:53:08.454821 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a54f13da-aab9-4190-8ddd-2537836ce0d9","Type":"ContainerDied","Data":"2c5c210ad8b15bc7d9db0919194d25b20df26bad178aeaf1f86e34170c57693a"} Dec 15 05:53:08 crc kubenswrapper[4747]: I1215 05:53:08.485381 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-796db5f74c-84jzt"] Dec 15 05:53:08 crc kubenswrapper[4747]: I1215 05:53:08.494454 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-796db5f74c-84jzt"] Dec 15 05:53:08 crc kubenswrapper[4747]: I1215 05:53:08.640892 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e10c5cb-1a07-4941-885e-45dc50d15021" path="/var/lib/kubelet/pods/9e10c5cb-1a07-4941-885e-45dc50d15021/volumes" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.346156 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.479282 4747 generic.go:334] "Generic (PLEG): container finished" podID="a54f13da-aab9-4190-8ddd-2537836ce0d9" containerID="b07fec422dd83412a0863895c073d1eef2f7ca8cecead159bef1784412b00d4d" exitCode=0 Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.479358 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a54f13da-aab9-4190-8ddd-2537836ce0d9","Type":"ContainerDied","Data":"b07fec422dd83412a0863895c073d1eef2f7ca8cecead159bef1784412b00d4d"} Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.479419 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.479453 4747 scope.go:117] "RemoveContainer" containerID="2c5c210ad8b15bc7d9db0919194d25b20df26bad178aeaf1f86e34170c57693a" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.479434 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a54f13da-aab9-4190-8ddd-2537836ce0d9","Type":"ContainerDied","Data":"4e484b737cb3da489f4c7b8a5b8c52525f23c53970249191d9f5aab5ff98a9a5"} Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.499522 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-config-data\") pod \"a54f13da-aab9-4190-8ddd-2537836ce0d9\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.499670 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-config-data-custom\") pod \"a54f13da-aab9-4190-8ddd-2537836ce0d9\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " Dec 15 05:53:10 
crc kubenswrapper[4747]: I1215 05:53:10.499708 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-scripts\") pod \"a54f13da-aab9-4190-8ddd-2537836ce0d9\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.499739 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a54f13da-aab9-4190-8ddd-2537836ce0d9-etc-machine-id\") pod \"a54f13da-aab9-4190-8ddd-2537836ce0d9\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.499808 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-combined-ca-bundle\") pod \"a54f13da-aab9-4190-8ddd-2537836ce0d9\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.499846 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctbxc\" (UniqueName: \"kubernetes.io/projected/a54f13da-aab9-4190-8ddd-2537836ce0d9-kube-api-access-ctbxc\") pod \"a54f13da-aab9-4190-8ddd-2537836ce0d9\" (UID: \"a54f13da-aab9-4190-8ddd-2537836ce0d9\") " Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.500070 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a54f13da-aab9-4190-8ddd-2537836ce0d9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a54f13da-aab9-4190-8ddd-2537836ce0d9" (UID: "a54f13da-aab9-4190-8ddd-2537836ce0d9"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.500611 4747 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a54f13da-aab9-4190-8ddd-2537836ce0d9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.506014 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a54f13da-aab9-4190-8ddd-2537836ce0d9" (UID: "a54f13da-aab9-4190-8ddd-2537836ce0d9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.506099 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-scripts" (OuterVolumeSpecName: "scripts") pod "a54f13da-aab9-4190-8ddd-2537836ce0d9" (UID: "a54f13da-aab9-4190-8ddd-2537836ce0d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.506140 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a54f13da-aab9-4190-8ddd-2537836ce0d9-kube-api-access-ctbxc" (OuterVolumeSpecName: "kube-api-access-ctbxc") pod "a54f13da-aab9-4190-8ddd-2537836ce0d9" (UID: "a54f13da-aab9-4190-8ddd-2537836ce0d9"). InnerVolumeSpecName "kube-api-access-ctbxc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.512939 4747 scope.go:117] "RemoveContainer" containerID="b07fec422dd83412a0863895c073d1eef2f7ca8cecead159bef1784412b00d4d" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.544473 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a54f13da-aab9-4190-8ddd-2537836ce0d9" (UID: "a54f13da-aab9-4190-8ddd-2537836ce0d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.576896 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-config-data" (OuterVolumeSpecName: "config-data") pod "a54f13da-aab9-4190-8ddd-2537836ce0d9" (UID: "a54f13da-aab9-4190-8ddd-2537836ce0d9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.589059 4747 scope.go:117] "RemoveContainer" containerID="2c5c210ad8b15bc7d9db0919194d25b20df26bad178aeaf1f86e34170c57693a" Dec 15 05:53:10 crc kubenswrapper[4747]: E1215 05:53:10.589577 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c5c210ad8b15bc7d9db0919194d25b20df26bad178aeaf1f86e34170c57693a\": container with ID starting with 2c5c210ad8b15bc7d9db0919194d25b20df26bad178aeaf1f86e34170c57693a not found: ID does not exist" containerID="2c5c210ad8b15bc7d9db0919194d25b20df26bad178aeaf1f86e34170c57693a" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.589630 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c5c210ad8b15bc7d9db0919194d25b20df26bad178aeaf1f86e34170c57693a"} err="failed to get container status \"2c5c210ad8b15bc7d9db0919194d25b20df26bad178aeaf1f86e34170c57693a\": rpc error: code = NotFound desc = could not find container \"2c5c210ad8b15bc7d9db0919194d25b20df26bad178aeaf1f86e34170c57693a\": container with ID starting with 2c5c210ad8b15bc7d9db0919194d25b20df26bad178aeaf1f86e34170c57693a not found: ID does not exist" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.589660 4747 scope.go:117] "RemoveContainer" containerID="b07fec422dd83412a0863895c073d1eef2f7ca8cecead159bef1784412b00d4d" Dec 15 05:53:10 crc kubenswrapper[4747]: E1215 05:53:10.590080 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b07fec422dd83412a0863895c073d1eef2f7ca8cecead159bef1784412b00d4d\": container with ID starting with b07fec422dd83412a0863895c073d1eef2f7ca8cecead159bef1784412b00d4d not found: ID does not exist" containerID="b07fec422dd83412a0863895c073d1eef2f7ca8cecead159bef1784412b00d4d" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.590101 
4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b07fec422dd83412a0863895c073d1eef2f7ca8cecead159bef1784412b00d4d"} err="failed to get container status \"b07fec422dd83412a0863895c073d1eef2f7ca8cecead159bef1784412b00d4d\": rpc error: code = NotFound desc = could not find container \"b07fec422dd83412a0863895c073d1eef2f7ca8cecead159bef1784412b00d4d\": container with ID starting with b07fec422dd83412a0863895c073d1eef2f7ca8cecead159bef1784412b00d4d not found: ID does not exist" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.603398 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.603430 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.603445 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctbxc\" (UniqueName: \"kubernetes.io/projected/a54f13da-aab9-4190-8ddd-2537836ce0d9-kube-api-access-ctbxc\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.603457 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.603471 4747 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a54f13da-aab9-4190-8ddd-2537836ce0d9-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.801147 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-scheduler-0"] Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.808847 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.819141 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 15 05:53:10 crc kubenswrapper[4747]: E1215 05:53:10.819487 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed94a5c7-82af-48dc-8592-440d39a321f7" containerName="barbican-api" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.819507 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed94a5c7-82af-48dc-8592-440d39a321f7" containerName="barbican-api" Dec 15 05:53:10 crc kubenswrapper[4747]: E1215 05:53:10.819520 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e10c5cb-1a07-4941-885e-45dc50d15021" containerName="dnsmasq-dns" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.819527 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e10c5cb-1a07-4941-885e-45dc50d15021" containerName="dnsmasq-dns" Dec 15 05:53:10 crc kubenswrapper[4747]: E1215 05:53:10.819546 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e10c5cb-1a07-4941-885e-45dc50d15021" containerName="init" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.819552 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e10c5cb-1a07-4941-885e-45dc50d15021" containerName="init" Dec 15 05:53:10 crc kubenswrapper[4747]: E1215 05:53:10.819562 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed94a5c7-82af-48dc-8592-440d39a321f7" containerName="barbican-api-log" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.819568 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed94a5c7-82af-48dc-8592-440d39a321f7" containerName="barbican-api-log" Dec 15 05:53:10 crc kubenswrapper[4747]: E1215 05:53:10.819579 4747 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a54f13da-aab9-4190-8ddd-2537836ce0d9" containerName="cinder-scheduler" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.819585 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="a54f13da-aab9-4190-8ddd-2537836ce0d9" containerName="cinder-scheduler" Dec 15 05:53:10 crc kubenswrapper[4747]: E1215 05:53:10.819603 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a54f13da-aab9-4190-8ddd-2537836ce0d9" containerName="probe" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.819610 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="a54f13da-aab9-4190-8ddd-2537836ce0d9" containerName="probe" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.819753 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="a54f13da-aab9-4190-8ddd-2537836ce0d9" containerName="cinder-scheduler" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.819767 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed94a5c7-82af-48dc-8592-440d39a321f7" containerName="barbican-api" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.819777 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e10c5cb-1a07-4941-885e-45dc50d15021" containerName="dnsmasq-dns" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.819788 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="a54f13da-aab9-4190-8ddd-2537836ce0d9" containerName="probe" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.819808 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed94a5c7-82af-48dc-8592-440d39a321f7" containerName="barbican-api-log" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.820732 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.822255 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.837996 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.909291 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de848267-fecb-4856-98c8-e81c3cfbb156-scripts\") pod \"cinder-scheduler-0\" (UID: \"de848267-fecb-4856-98c8-e81c3cfbb156\") " pod="openstack/cinder-scheduler-0" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.909358 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de848267-fecb-4856-98c8-e81c3cfbb156-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"de848267-fecb-4856-98c8-e81c3cfbb156\") " pod="openstack/cinder-scheduler-0" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.909521 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8shhg\" (UniqueName: \"kubernetes.io/projected/de848267-fecb-4856-98c8-e81c3cfbb156-kube-api-access-8shhg\") pod \"cinder-scheduler-0\" (UID: \"de848267-fecb-4856-98c8-e81c3cfbb156\") " pod="openstack/cinder-scheduler-0" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.909550 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de848267-fecb-4856-98c8-e81c3cfbb156-config-data\") pod \"cinder-scheduler-0\" (UID: \"de848267-fecb-4856-98c8-e81c3cfbb156\") " pod="openstack/cinder-scheduler-0" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.909653 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de848267-fecb-4856-98c8-e81c3cfbb156-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"de848267-fecb-4856-98c8-e81c3cfbb156\") " pod="openstack/cinder-scheduler-0" Dec 15 05:53:10 crc kubenswrapper[4747]: I1215 05:53:10.909686 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de848267-fecb-4856-98c8-e81c3cfbb156-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"de848267-fecb-4856-98c8-e81c3cfbb156\") " pod="openstack/cinder-scheduler-0" Dec 15 05:53:11 crc kubenswrapper[4747]: I1215 05:53:11.012457 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8shhg\" (UniqueName: \"kubernetes.io/projected/de848267-fecb-4856-98c8-e81c3cfbb156-kube-api-access-8shhg\") pod \"cinder-scheduler-0\" (UID: \"de848267-fecb-4856-98c8-e81c3cfbb156\") " pod="openstack/cinder-scheduler-0" Dec 15 05:53:11 crc kubenswrapper[4747]: I1215 05:53:11.012509 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de848267-fecb-4856-98c8-e81c3cfbb156-config-data\") pod \"cinder-scheduler-0\" (UID: \"de848267-fecb-4856-98c8-e81c3cfbb156\") " pod="openstack/cinder-scheduler-0" Dec 15 05:53:11 crc kubenswrapper[4747]: I1215 05:53:11.012572 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de848267-fecb-4856-98c8-e81c3cfbb156-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"de848267-fecb-4856-98c8-e81c3cfbb156\") " pod="openstack/cinder-scheduler-0" Dec 15 05:53:11 crc kubenswrapper[4747]: I1215 05:53:11.012594 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/de848267-fecb-4856-98c8-e81c3cfbb156-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"de848267-fecb-4856-98c8-e81c3cfbb156\") " pod="openstack/cinder-scheduler-0" Dec 15 05:53:11 crc kubenswrapper[4747]: I1215 05:53:11.012658 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de848267-fecb-4856-98c8-e81c3cfbb156-scripts\") pod \"cinder-scheduler-0\" (UID: \"de848267-fecb-4856-98c8-e81c3cfbb156\") " pod="openstack/cinder-scheduler-0" Dec 15 05:53:11 crc kubenswrapper[4747]: I1215 05:53:11.012720 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de848267-fecb-4856-98c8-e81c3cfbb156-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"de848267-fecb-4856-98c8-e81c3cfbb156\") " pod="openstack/cinder-scheduler-0" Dec 15 05:53:11 crc kubenswrapper[4747]: I1215 05:53:11.012804 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de848267-fecb-4856-98c8-e81c3cfbb156-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"de848267-fecb-4856-98c8-e81c3cfbb156\") " pod="openstack/cinder-scheduler-0" Dec 15 05:53:11 crc kubenswrapper[4747]: I1215 05:53:11.017022 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de848267-fecb-4856-98c8-e81c3cfbb156-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"de848267-fecb-4856-98c8-e81c3cfbb156\") " pod="openstack/cinder-scheduler-0" Dec 15 05:53:11 crc kubenswrapper[4747]: I1215 05:53:11.017106 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de848267-fecb-4856-98c8-e81c3cfbb156-scripts\") pod \"cinder-scheduler-0\" (UID: \"de848267-fecb-4856-98c8-e81c3cfbb156\") " 
pod="openstack/cinder-scheduler-0" Dec 15 05:53:11 crc kubenswrapper[4747]: I1215 05:53:11.017287 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de848267-fecb-4856-98c8-e81c3cfbb156-config-data\") pod \"cinder-scheduler-0\" (UID: \"de848267-fecb-4856-98c8-e81c3cfbb156\") " pod="openstack/cinder-scheduler-0" Dec 15 05:53:11 crc kubenswrapper[4747]: I1215 05:53:11.018816 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de848267-fecb-4856-98c8-e81c3cfbb156-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"de848267-fecb-4856-98c8-e81c3cfbb156\") " pod="openstack/cinder-scheduler-0" Dec 15 05:53:11 crc kubenswrapper[4747]: I1215 05:53:11.030085 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8shhg\" (UniqueName: \"kubernetes.io/projected/de848267-fecb-4856-98c8-e81c3cfbb156-kube-api-access-8shhg\") pod \"cinder-scheduler-0\" (UID: \"de848267-fecb-4856-98c8-e81c3cfbb156\") " pod="openstack/cinder-scheduler-0" Dec 15 05:53:11 crc kubenswrapper[4747]: I1215 05:53:11.150763 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 15 05:53:11 crc kubenswrapper[4747]: I1215 05:53:11.596535 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 15 05:53:11 crc kubenswrapper[4747]: W1215 05:53:11.601394 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde848267_fecb_4856_98c8_e81c3cfbb156.slice/crio-aebeb3e882d725a5405c2bd1427aec26189a8ad20ba612546e8b97eacafbd831 WatchSource:0}: Error finding container aebeb3e882d725a5405c2bd1427aec26189a8ad20ba612546e8b97eacafbd831: Status 404 returned error can't find the container with id aebeb3e882d725a5405c2bd1427aec26189a8ad20ba612546e8b97eacafbd831 Dec 15 05:53:12 crc kubenswrapper[4747]: I1215 05:53:12.497510 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"de848267-fecb-4856-98c8-e81c3cfbb156","Type":"ContainerStarted","Data":"38c5d723225ba4bf9fc40c287128af4e4fa31d7a47f18d71ef87a86beba415d5"} Dec 15 05:53:12 crc kubenswrapper[4747]: I1215 05:53:12.497949 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"de848267-fecb-4856-98c8-e81c3cfbb156","Type":"ContainerStarted","Data":"aebeb3e882d725a5405c2bd1427aec26189a8ad20ba612546e8b97eacafbd831"} Dec 15 05:53:12 crc kubenswrapper[4747]: I1215 05:53:12.644378 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a54f13da-aab9-4190-8ddd-2537836ce0d9" path="/var/lib/kubelet/pods/a54f13da-aab9-4190-8ddd-2537836ce0d9/volumes" Dec 15 05:53:13 crc kubenswrapper[4747]: I1215 05:53:13.017723 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6b5fccc9fc-25v6s" Dec 15 05:53:13 crc kubenswrapper[4747]: I1215 05:53:13.397861 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 15 05:53:13 crc 
kubenswrapper[4747]: I1215 05:53:13.516524 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"de848267-fecb-4856-98c8-e81c3cfbb156","Type":"ContainerStarted","Data":"ba5bc1e09d79f6cb3c92e753f404daeeba5c9a042e1d47ccaecb77bcd714ba9d"} Dec 15 05:53:13 crc kubenswrapper[4747]: I1215 05:53:13.540843 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.540825629 podStartE2EDuration="3.540825629s" podCreationTimestamp="2025-12-15 05:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:53:13.533206491 +0000 UTC m=+957.229718408" watchObservedRunningTime="2025-12-15 05:53:13.540825629 +0000 UTC m=+957.237337547" Dec 15 05:53:16 crc kubenswrapper[4747]: I1215 05:53:16.025677 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 15 05:53:16 crc kubenswrapper[4747]: I1215 05:53:16.027246 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 15 05:53:16 crc kubenswrapper[4747]: I1215 05:53:16.029897 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 15 05:53:16 crc kubenswrapper[4747]: I1215 05:53:16.032625 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-gkm8x" Dec 15 05:53:16 crc kubenswrapper[4747]: I1215 05:53:16.033126 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 15 05:53:16 crc kubenswrapper[4747]: I1215 05:53:16.042440 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 15 05:53:16 crc kubenswrapper[4747]: I1215 05:53:16.151024 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 15 05:53:16 crc kubenswrapper[4747]: I1215 05:53:16.225893 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/03ee9ab5-c184-4473-ba41-5609f6aa29df-openstack-config\") pod \"openstackclient\" (UID: \"03ee9ab5-c184-4473-ba41-5609f6aa29df\") " pod="openstack/openstackclient" Dec 15 05:53:16 crc kubenswrapper[4747]: I1215 05:53:16.225966 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/03ee9ab5-c184-4473-ba41-5609f6aa29df-openstack-config-secret\") pod \"openstackclient\" (UID: \"03ee9ab5-c184-4473-ba41-5609f6aa29df\") " pod="openstack/openstackclient" Dec 15 05:53:16 crc kubenswrapper[4747]: I1215 05:53:16.226012 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg2pv\" (UniqueName: \"kubernetes.io/projected/03ee9ab5-c184-4473-ba41-5609f6aa29df-kube-api-access-sg2pv\") pod 
\"openstackclient\" (UID: \"03ee9ab5-c184-4473-ba41-5609f6aa29df\") " pod="openstack/openstackclient" Dec 15 05:53:16 crc kubenswrapper[4747]: I1215 05:53:16.226040 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ee9ab5-c184-4473-ba41-5609f6aa29df-combined-ca-bundle\") pod \"openstackclient\" (UID: \"03ee9ab5-c184-4473-ba41-5609f6aa29df\") " pod="openstack/openstackclient" Dec 15 05:53:16 crc kubenswrapper[4747]: I1215 05:53:16.326866 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg2pv\" (UniqueName: \"kubernetes.io/projected/03ee9ab5-c184-4473-ba41-5609f6aa29df-kube-api-access-sg2pv\") pod \"openstackclient\" (UID: \"03ee9ab5-c184-4473-ba41-5609f6aa29df\") " pod="openstack/openstackclient" Dec 15 05:53:16 crc kubenswrapper[4747]: I1215 05:53:16.326910 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ee9ab5-c184-4473-ba41-5609f6aa29df-combined-ca-bundle\") pod \"openstackclient\" (UID: \"03ee9ab5-c184-4473-ba41-5609f6aa29df\") " pod="openstack/openstackclient" Dec 15 05:53:16 crc kubenswrapper[4747]: I1215 05:53:16.327042 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/03ee9ab5-c184-4473-ba41-5609f6aa29df-openstack-config\") pod \"openstackclient\" (UID: \"03ee9ab5-c184-4473-ba41-5609f6aa29df\") " pod="openstack/openstackclient" Dec 15 05:53:16 crc kubenswrapper[4747]: I1215 05:53:16.327075 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/03ee9ab5-c184-4473-ba41-5609f6aa29df-openstack-config-secret\") pod \"openstackclient\" (UID: \"03ee9ab5-c184-4473-ba41-5609f6aa29df\") " pod="openstack/openstackclient" Dec 15 05:53:16 crc 
kubenswrapper[4747]: I1215 05:53:16.328795 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/03ee9ab5-c184-4473-ba41-5609f6aa29df-openstack-config\") pod \"openstackclient\" (UID: \"03ee9ab5-c184-4473-ba41-5609f6aa29df\") " pod="openstack/openstackclient" Dec 15 05:53:16 crc kubenswrapper[4747]: I1215 05:53:16.333361 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ee9ab5-c184-4473-ba41-5609f6aa29df-combined-ca-bundle\") pod \"openstackclient\" (UID: \"03ee9ab5-c184-4473-ba41-5609f6aa29df\") " pod="openstack/openstackclient" Dec 15 05:53:16 crc kubenswrapper[4747]: I1215 05:53:16.342764 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/03ee9ab5-c184-4473-ba41-5609f6aa29df-openstack-config-secret\") pod \"openstackclient\" (UID: \"03ee9ab5-c184-4473-ba41-5609f6aa29df\") " pod="openstack/openstackclient" Dec 15 05:53:16 crc kubenswrapper[4747]: I1215 05:53:16.342969 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg2pv\" (UniqueName: \"kubernetes.io/projected/03ee9ab5-c184-4473-ba41-5609f6aa29df-kube-api-access-sg2pv\") pod \"openstackclient\" (UID: \"03ee9ab5-c184-4473-ba41-5609f6aa29df\") " pod="openstack/openstackclient" Dec 15 05:53:16 crc kubenswrapper[4747]: I1215 05:53:16.344962 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 15 05:53:16 crc kubenswrapper[4747]: I1215 05:53:16.786033 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 15 05:53:16 crc kubenswrapper[4747]: W1215 05:53:16.790035 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03ee9ab5_c184_4473_ba41_5609f6aa29df.slice/crio-7b274bf3fc8aee48f33cef5124a933ccaefb8c70b971fb7a9dc8bc9b78395cec WatchSource:0}: Error finding container 7b274bf3fc8aee48f33cef5124a933ccaefb8c70b971fb7a9dc8bc9b78395cec: Status 404 returned error can't find the container with id 7b274bf3fc8aee48f33cef5124a933ccaefb8c70b971fb7a9dc8bc9b78395cec Dec 15 05:53:17 crc kubenswrapper[4747]: I1215 05:53:17.567139 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"03ee9ab5-c184-4473-ba41-5609f6aa29df","Type":"ContainerStarted","Data":"7b274bf3fc8aee48f33cef5124a933ccaefb8c70b971fb7a9dc8bc9b78395cec"} Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.370187 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-84688cc58c-2mrlh"] Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.373298 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.393718 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.394358 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.397553 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.412381 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-84688cc58c-2mrlh"] Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.514606 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-log-httpd\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.514713 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-internal-tls-certs\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.514800 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czl22\" (UniqueName: \"kubernetes.io/projected/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-kube-api-access-czl22\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc 
kubenswrapper[4747]: I1215 05:53:20.514957 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-etc-swift\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.515076 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-public-tls-certs\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.515336 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-config-data\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.515391 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-run-httpd\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.515704 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-combined-ca-bundle\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" 
Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.618352 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-etc-swift\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.618516 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-public-tls-certs\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.618576 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-config-data\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.618691 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-run-httpd\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.619378 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-run-httpd\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.619402 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-combined-ca-bundle\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.619536 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-log-httpd\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.619600 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-internal-tls-certs\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.619660 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czl22\" (UniqueName: \"kubernetes.io/projected/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-kube-api-access-czl22\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.620346 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-log-httpd\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.625386 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-internal-tls-certs\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.625544 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-public-tls-certs\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.631832 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-etc-swift\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.635017 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-config-data\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.639616 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czl22\" (UniqueName: \"kubernetes.io/projected/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-kube-api-access-czl22\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.649390 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8-combined-ca-bundle\") pod \"swift-proxy-84688cc58c-2mrlh\" (UID: \"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8\") " pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:20 crc kubenswrapper[4747]: I1215 05:53:20.714156 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:21 crc kubenswrapper[4747]: I1215 05:53:21.316603 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-84688cc58c-2mrlh"] Dec 15 05:53:21 crc kubenswrapper[4747]: W1215 05:53:21.328671 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01f6a5bb_ed36_44f2_b5be_5d3b235ca4e8.slice/crio-302de7c74a4f0ba947396dd63e87ba3035fb440b1062f9284d42a5a22f7c07a4 WatchSource:0}: Error finding container 302de7c74a4f0ba947396dd63e87ba3035fb440b1062f9284d42a5a22f7c07a4: Status 404 returned error can't find the container with id 302de7c74a4f0ba947396dd63e87ba3035fb440b1062f9284d42a5a22f7c07a4 Dec 15 05:53:21 crc kubenswrapper[4747]: I1215 05:53:21.395145 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 15 05:53:21 crc kubenswrapper[4747]: I1215 05:53:21.622882 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84688cc58c-2mrlh" event={"ID":"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8","Type":"ContainerStarted","Data":"65ecc36cb7f15507faca9a8f99bdcf01520bb63a9e76ff8a690affc0868b3f15"} Dec 15 05:53:21 crc kubenswrapper[4747]: I1215 05:53:21.623187 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84688cc58c-2mrlh" event={"ID":"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8","Type":"ContainerStarted","Data":"302de7c74a4f0ba947396dd63e87ba3035fb440b1062f9284d42a5a22f7c07a4"} Dec 15 05:53:22 crc kubenswrapper[4747]: I1215 05:53:22.499342 4747 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:53:22 crc kubenswrapper[4747]: I1215 05:53:22.499653 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerName="ceilometer-central-agent" containerID="cri-o://daa0ceba3762d60dee2c4ebe4af6ab63fae0d85ea55ae59ebaac0f4626051e66" gracePeriod=30 Dec 15 05:53:22 crc kubenswrapper[4747]: I1215 05:53:22.499812 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerName="proxy-httpd" containerID="cri-o://15be3e8623c592482be8fabb0d6cd06e494575327ab95ae3b63e7f5f28913c69" gracePeriod=30 Dec 15 05:53:22 crc kubenswrapper[4747]: I1215 05:53:22.499856 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerName="sg-core" containerID="cri-o://6b2524aa5d7017805bf20260665d8476711b068229db516c11f12fa0be79bee9" gracePeriod=30 Dec 15 05:53:22 crc kubenswrapper[4747]: I1215 05:53:22.499893 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerName="ceilometer-notification-agent" containerID="cri-o://fffdbea136fb9d8541a4d2f0b764d9a1c5a31445a85e79ae1c0e202bc75bb81a" gracePeriod=30 Dec 15 05:53:22 crc kubenswrapper[4747]: I1215 05:53:22.548428 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.156:3000/\": EOF" Dec 15 05:53:22 crc kubenswrapper[4747]: I1215 05:53:22.652844 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84688cc58c-2mrlh" 
event={"ID":"01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8","Type":"ContainerStarted","Data":"5666f5c3e87fc105722720533964acc17faeb59e28df30e58b378c03522048de"} Dec 15 05:53:22 crc kubenswrapper[4747]: I1215 05:53:22.652982 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:22 crc kubenswrapper[4747]: I1215 05:53:22.677621 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-84688cc58c-2mrlh" podStartSLOduration=2.677604117 podStartE2EDuration="2.677604117s" podCreationTimestamp="2025-12-15 05:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:53:22.672609172 +0000 UTC m=+966.369121090" watchObservedRunningTime="2025-12-15 05:53:22.677604117 +0000 UTC m=+966.374116034" Dec 15 05:53:23 crc kubenswrapper[4747]: I1215 05:53:23.667386 4747 generic.go:334] "Generic (PLEG): container finished" podID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerID="15be3e8623c592482be8fabb0d6cd06e494575327ab95ae3b63e7f5f28913c69" exitCode=0 Dec 15 05:53:23 crc kubenswrapper[4747]: I1215 05:53:23.667787 4747 generic.go:334] "Generic (PLEG): container finished" podID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerID="6b2524aa5d7017805bf20260665d8476711b068229db516c11f12fa0be79bee9" exitCode=2 Dec 15 05:53:23 crc kubenswrapper[4747]: I1215 05:53:23.667800 4747 generic.go:334] "Generic (PLEG): container finished" podID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerID="daa0ceba3762d60dee2c4ebe4af6ab63fae0d85ea55ae59ebaac0f4626051e66" exitCode=0 Dec 15 05:53:23 crc kubenswrapper[4747]: I1215 05:53:23.667472 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bdd70cf4-833d-4f42-b330-18deb7418bb2","Type":"ContainerDied","Data":"15be3e8623c592482be8fabb0d6cd06e494575327ab95ae3b63e7f5f28913c69"} Dec 15 05:53:23 crc 
kubenswrapper[4747]: I1215 05:53:23.667907 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bdd70cf4-833d-4f42-b330-18deb7418bb2","Type":"ContainerDied","Data":"6b2524aa5d7017805bf20260665d8476711b068229db516c11f12fa0be79bee9"} Dec 15 05:53:23 crc kubenswrapper[4747]: I1215 05:53:23.668173 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:23 crc kubenswrapper[4747]: I1215 05:53:23.668235 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bdd70cf4-833d-4f42-b330-18deb7418bb2","Type":"ContainerDied","Data":"daa0ceba3762d60dee2c4ebe4af6ab63fae0d85ea55ae59ebaac0f4626051e66"} Dec 15 05:53:24 crc kubenswrapper[4747]: I1215 05:53:24.899669 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.156:3000/\": dial tcp 10.217.0.156:3000: connect: connection refused" Dec 15 05:53:26 crc kubenswrapper[4747]: I1215 05:53:26.695945 4747 generic.go:334] "Generic (PLEG): container finished" podID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerID="fffdbea136fb9d8541a4d2f0b764d9a1c5a31445a85e79ae1c0e202bc75bb81a" exitCode=0 Dec 15 05:53:26 crc kubenswrapper[4747]: I1215 05:53:26.696028 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bdd70cf4-833d-4f42-b330-18deb7418bb2","Type":"ContainerDied","Data":"fffdbea136fb9d8541a4d2f0b764d9a1c5a31445a85e79ae1c0e202bc75bb81a"} Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.254603 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.397584 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-sg-core-conf-yaml\") pod \"bdd70cf4-833d-4f42-b330-18deb7418bb2\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.398118 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdd70cf4-833d-4f42-b330-18deb7418bb2-log-httpd\") pod \"bdd70cf4-833d-4f42-b330-18deb7418bb2\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.398195 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmv5q\" (UniqueName: \"kubernetes.io/projected/bdd70cf4-833d-4f42-b330-18deb7418bb2-kube-api-access-qmv5q\") pod \"bdd70cf4-833d-4f42-b330-18deb7418bb2\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.398251 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdd70cf4-833d-4f42-b330-18deb7418bb2-run-httpd\") pod \"bdd70cf4-833d-4f42-b330-18deb7418bb2\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.398288 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-config-data\") pod \"bdd70cf4-833d-4f42-b330-18deb7418bb2\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.398313 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-scripts\") pod \"bdd70cf4-833d-4f42-b330-18deb7418bb2\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.398336 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-combined-ca-bundle\") pod \"bdd70cf4-833d-4f42-b330-18deb7418bb2\" (UID: \"bdd70cf4-833d-4f42-b330-18deb7418bb2\") " Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.398890 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdd70cf4-833d-4f42-b330-18deb7418bb2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bdd70cf4-833d-4f42-b330-18deb7418bb2" (UID: "bdd70cf4-833d-4f42-b330-18deb7418bb2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.399301 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdd70cf4-833d-4f42-b330-18deb7418bb2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bdd70cf4-833d-4f42-b330-18deb7418bb2" (UID: "bdd70cf4-833d-4f42-b330-18deb7418bb2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.404043 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdd70cf4-833d-4f42-b330-18deb7418bb2-kube-api-access-qmv5q" (OuterVolumeSpecName: "kube-api-access-qmv5q") pod "bdd70cf4-833d-4f42-b330-18deb7418bb2" (UID: "bdd70cf4-833d-4f42-b330-18deb7418bb2"). InnerVolumeSpecName "kube-api-access-qmv5q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.404433 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-scripts" (OuterVolumeSpecName: "scripts") pod "bdd70cf4-833d-4f42-b330-18deb7418bb2" (UID: "bdd70cf4-833d-4f42-b330-18deb7418bb2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.428817 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bdd70cf4-833d-4f42-b330-18deb7418bb2" (UID: "bdd70cf4-833d-4f42-b330-18deb7418bb2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.471982 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdd70cf4-833d-4f42-b330-18deb7418bb2" (UID: "bdd70cf4-833d-4f42-b330-18deb7418bb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.486910 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-config-data" (OuterVolumeSpecName: "config-data") pod "bdd70cf4-833d-4f42-b330-18deb7418bb2" (UID: "bdd70cf4-833d-4f42-b330-18deb7418bb2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.500447 4747 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdd70cf4-833d-4f42-b330-18deb7418bb2-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.500476 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmv5q\" (UniqueName: \"kubernetes.io/projected/bdd70cf4-833d-4f42-b330-18deb7418bb2-kube-api-access-qmv5q\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.500489 4747 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdd70cf4-833d-4f42-b330-18deb7418bb2-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.500500 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.500510 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.500522 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.500534 4747 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bdd70cf4-833d-4f42-b330-18deb7418bb2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.720156 4747 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bdd70cf4-833d-4f42-b330-18deb7418bb2","Type":"ContainerDied","Data":"ba9bace7ba6c981ebe08a0251f019e3057451985e4b451a5afa78a14b3b1abdc"} Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.720181 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.720528 4747 scope.go:117] "RemoveContainer" containerID="15be3e8623c592482be8fabb0d6cd06e494575327ab95ae3b63e7f5f28913c69" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.722706 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"03ee9ab5-c184-4473-ba41-5609f6aa29df","Type":"ContainerStarted","Data":"6e20b84bb1d653f559dfb99a4bdfb3bf5d9218c45607e2fa151d43be059c18c4"} Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.739684 4747 scope.go:117] "RemoveContainer" containerID="6b2524aa5d7017805bf20260665d8476711b068229db516c11f12fa0be79bee9" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.754533 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.590118833 podStartE2EDuration="13.754502585s" podCreationTimestamp="2025-12-15 05:53:15 +0000 UTC" firstStartedPulling="2025-12-15 05:53:16.793301878 +0000 UTC m=+960.489813786" lastFinishedPulling="2025-12-15 05:53:27.957685621 +0000 UTC m=+971.654197538" observedRunningTime="2025-12-15 05:53:28.748896522 +0000 UTC m=+972.445408440" watchObservedRunningTime="2025-12-15 05:53:28.754502585 +0000 UTC m=+972.451014503" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.763312 4747 scope.go:117] "RemoveContainer" containerID="fffdbea136fb9d8541a4d2f0b764d9a1c5a31445a85e79ae1c0e202bc75bb81a" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.774174 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:53:28 crc 
kubenswrapper[4747]: I1215 05:53:28.779074 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.785646 4747 scope.go:117] "RemoveContainer" containerID="daa0ceba3762d60dee2c4ebe4af6ab63fae0d85ea55ae59ebaac0f4626051e66" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.795602 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:53:28 crc kubenswrapper[4747]: E1215 05:53:28.796067 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerName="ceilometer-central-agent" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.796086 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerName="ceilometer-central-agent" Dec 15 05:53:28 crc kubenswrapper[4747]: E1215 05:53:28.796098 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerName="ceilometer-notification-agent" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.796105 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerName="ceilometer-notification-agent" Dec 15 05:53:28 crc kubenswrapper[4747]: E1215 05:53:28.796118 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerName="sg-core" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.796124 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerName="sg-core" Dec 15 05:53:28 crc kubenswrapper[4747]: E1215 05:53:28.796142 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerName="proxy-httpd" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.796149 4747 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerName="proxy-httpd" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.796358 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerName="sg-core" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.796373 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerName="proxy-httpd" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.796383 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerName="ceilometer-central-agent" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.796410 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdd70cf4-833d-4f42-b330-18deb7418bb2" containerName="ceilometer-notification-agent" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.798384 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.800304 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.800864 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.807888 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-run-httpd\") pod \"ceilometer-0\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.807968 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.808023 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-config-data\") pod \"ceilometer-0\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.808168 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.808195 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvtdb\" (UniqueName: \"kubernetes.io/projected/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-kube-api-access-qvtdb\") pod \"ceilometer-0\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.808285 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-scripts\") pod \"ceilometer-0\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.808438 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-log-httpd\") pod \"ceilometer-0\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.814135 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.911149 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-log-httpd\") pod \"ceilometer-0\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.911241 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-run-httpd\") pod \"ceilometer-0\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.911273 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.911313 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-config-data\") pod \"ceilometer-0\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.911437 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.911471 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvtdb\" (UniqueName: \"kubernetes.io/projected/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-kube-api-access-qvtdb\") pod \"ceilometer-0\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.911540 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-scripts\") pod \"ceilometer-0\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.913472 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-log-httpd\") pod \"ceilometer-0\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " pod="openstack/ceilometer-0" Dec 15 05:53:28 
crc kubenswrapper[4747]: I1215 05:53:28.913536 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-run-httpd\") pod \"ceilometer-0\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.925069 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-scripts\") pod \"ceilometer-0\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.925593 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.925601 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.925896 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-config-data\") pod \"ceilometer-0\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " pod="openstack/ceilometer-0" Dec 15 05:53:28 crc kubenswrapper[4747]: I1215 05:53:28.932336 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvtdb\" (UniqueName: \"kubernetes.io/projected/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-kube-api-access-qvtdb\") pod \"ceilometer-0\" (UID: 
\"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " pod="openstack/ceilometer-0" Dec 15 05:53:29 crc kubenswrapper[4747]: I1215 05:53:29.122518 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:53:29 crc kubenswrapper[4747]: I1215 05:53:29.561627 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:53:29 crc kubenswrapper[4747]: I1215 05:53:29.735406 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab","Type":"ContainerStarted","Data":"38829fa44528d9797f3189b7bf573545748bfbd08c00d1dedf664bd2304b8899"} Dec 15 05:53:29 crc kubenswrapper[4747]: I1215 05:53:29.861103 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:53:30 crc kubenswrapper[4747]: I1215 05:53:30.602456 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 15 05:53:30 crc kubenswrapper[4747]: I1215 05:53:30.602961 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="025696d8-212d-4b2b-bff8-87abde7b3a0b" containerName="glance-log" containerID="cri-o://d991ef00573efcbdd8dd0d4b061264a9a748353813472f8e334fe651b644f113" gracePeriod=30 Dec 15 05:53:30 crc kubenswrapper[4747]: I1215 05:53:30.603071 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="025696d8-212d-4b2b-bff8-87abde7b3a0b" containerName="glance-httpd" containerID="cri-o://ac19bbda4e664a2896828a1315442fcde4e5ace0b7c0c5289e002c127d811368" gracePeriod=30 Dec 15 05:53:30 crc kubenswrapper[4747]: I1215 05:53:30.660111 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdd70cf4-833d-4f42-b330-18deb7418bb2" path="/var/lib/kubelet/pods/bdd70cf4-833d-4f42-b330-18deb7418bb2/volumes" Dec 15 05:53:30 crc 
kubenswrapper[4747]: I1215 05:53:30.723403 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:30 crc kubenswrapper[4747]: I1215 05:53:30.730874 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-84688cc58c-2mrlh" Dec 15 05:53:30 crc kubenswrapper[4747]: I1215 05:53:30.765836 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab","Type":"ContainerStarted","Data":"092b17ebb6940478f5a6a209ea30bdb7b690dfcbdd47c24a52bed0febada9630"} Dec 15 05:53:30 crc kubenswrapper[4747]: I1215 05:53:30.774576 4747 generic.go:334] "Generic (PLEG): container finished" podID="025696d8-212d-4b2b-bff8-87abde7b3a0b" containerID="d991ef00573efcbdd8dd0d4b061264a9a748353813472f8e334fe651b644f113" exitCode=143 Dec 15 05:53:30 crc kubenswrapper[4747]: I1215 05:53:30.774769 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"025696d8-212d-4b2b-bff8-87abde7b3a0b","Type":"ContainerDied","Data":"d991ef00573efcbdd8dd0d4b061264a9a748353813472f8e334fe651b644f113"} Dec 15 05:53:31 crc kubenswrapper[4747]: I1215 05:53:31.795105 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab","Type":"ContainerStarted","Data":"04ea6ec9dee7d98840f079728a4a45058bbd6871c1aa613688cb2fab919a26f0"} Dec 15 05:53:32 crc kubenswrapper[4747]: I1215 05:53:32.808543 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab","Type":"ContainerStarted","Data":"060f2615c80f98f57a981ebf97753de94f57ff94cb5ebcca426cc96fa4420570"} Dec 15 05:53:33 crc kubenswrapper[4747]: I1215 05:53:33.461017 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 15 05:53:33 crc 
kubenswrapper[4747]: I1215 05:53:33.461558 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8" containerName="glance-log" containerID="cri-o://dc0b73a864df0515127d885874d7f1f5e1e02b0cecbde221190011f85b394b7d" gracePeriod=30 Dec 15 05:53:33 crc kubenswrapper[4747]: I1215 05:53:33.461645 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8" containerName="glance-httpd" containerID="cri-o://b8a96438134d446f04ed9de3b208f822638cde09ceca85794ae49b14c867a63e" gracePeriod=30 Dec 15 05:53:33 crc kubenswrapper[4747]: I1215 05:53:33.823775 4747 generic.go:334] "Generic (PLEG): container finished" podID="6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8" containerID="dc0b73a864df0515127d885874d7f1f5e1e02b0cecbde221190011f85b394b7d" exitCode=143 Dec 15 05:53:33 crc kubenswrapper[4747]: I1215 05:53:33.823825 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8","Type":"ContainerDied","Data":"dc0b73a864df0515127d885874d7f1f5e1e02b0cecbde221190011f85b394b7d"} Dec 15 05:53:33 crc kubenswrapper[4747]: I1215 05:53:33.826901 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab","Type":"ContainerStarted","Data":"528035e2e78bcc1014617960a1079b72ab0c9d3e54ac4cdd07d9c73bfd8948e8"} Dec 15 05:53:33 crc kubenswrapper[4747]: I1215 05:53:33.827071 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" containerName="ceilometer-central-agent" containerID="cri-o://092b17ebb6940478f5a6a209ea30bdb7b690dfcbdd47c24a52bed0febada9630" gracePeriod=30 Dec 15 05:53:33 crc kubenswrapper[4747]: I1215 05:53:33.827109 4747 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" containerName="proxy-httpd" containerID="cri-o://528035e2e78bcc1014617960a1079b72ab0c9d3e54ac4cdd07d9c73bfd8948e8" gracePeriod=30 Dec 15 05:53:33 crc kubenswrapper[4747]: I1215 05:53:33.827130 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" containerName="sg-core" containerID="cri-o://060f2615c80f98f57a981ebf97753de94f57ff94cb5ebcca426cc96fa4420570" gracePeriod=30 Dec 15 05:53:33 crc kubenswrapper[4747]: I1215 05:53:33.827139 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" containerName="ceilometer-notification-agent" containerID="cri-o://04ea6ec9dee7d98840f079728a4a45058bbd6871c1aa613688cb2fab919a26f0" gracePeriod=30 Dec 15 05:53:33 crc kubenswrapper[4747]: I1215 05:53:33.827255 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 15 05:53:33 crc kubenswrapper[4747]: I1215 05:53:33.856583 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.808565676 podStartE2EDuration="5.85656315s" podCreationTimestamp="2025-12-15 05:53:28 +0000 UTC" firstStartedPulling="2025-12-15 05:53:29.566488306 +0000 UTC m=+973.263000223" lastFinishedPulling="2025-12-15 05:53:33.614485779 +0000 UTC m=+977.310997697" observedRunningTime="2025-12-15 05:53:33.850497133 +0000 UTC m=+977.547009050" watchObservedRunningTime="2025-12-15 05:53:33.85656315 +0000 UTC m=+977.553075066" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.107265 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.219640 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-public-tls-certs\") pod \"025696d8-212d-4b2b-bff8-87abde7b3a0b\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.219690 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-combined-ca-bundle\") pod \"025696d8-212d-4b2b-bff8-87abde7b3a0b\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.219839 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/025696d8-212d-4b2b-bff8-87abde7b3a0b-httpd-run\") pod \"025696d8-212d-4b2b-bff8-87abde7b3a0b\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.219866 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/025696d8-212d-4b2b-bff8-87abde7b3a0b-logs\") pod \"025696d8-212d-4b2b-bff8-87abde7b3a0b\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.220349 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/025696d8-212d-4b2b-bff8-87abde7b3a0b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "025696d8-212d-4b2b-bff8-87abde7b3a0b" (UID: "025696d8-212d-4b2b-bff8-87abde7b3a0b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.220450 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-scripts\") pod \"025696d8-212d-4b2b-bff8-87abde7b3a0b\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.220508 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngrxf\" (UniqueName: \"kubernetes.io/projected/025696d8-212d-4b2b-bff8-87abde7b3a0b-kube-api-access-ngrxf\") pod \"025696d8-212d-4b2b-bff8-87abde7b3a0b\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.220504 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/025696d8-212d-4b2b-bff8-87abde7b3a0b-logs" (OuterVolumeSpecName: "logs") pod "025696d8-212d-4b2b-bff8-87abde7b3a0b" (UID: "025696d8-212d-4b2b-bff8-87abde7b3a0b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.220555 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-config-data\") pod \"025696d8-212d-4b2b-bff8-87abde7b3a0b\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.220623 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"025696d8-212d-4b2b-bff8-87abde7b3a0b\" (UID: \"025696d8-212d-4b2b-bff8-87abde7b3a0b\") " Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.221500 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/025696d8-212d-4b2b-bff8-87abde7b3a0b-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.221520 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/025696d8-212d-4b2b-bff8-87abde7b3a0b-logs\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.224605 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-scripts" (OuterVolumeSpecName: "scripts") pod "025696d8-212d-4b2b-bff8-87abde7b3a0b" (UID: "025696d8-212d-4b2b-bff8-87abde7b3a0b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.225264 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/025696d8-212d-4b2b-bff8-87abde7b3a0b-kube-api-access-ngrxf" (OuterVolumeSpecName: "kube-api-access-ngrxf") pod "025696d8-212d-4b2b-bff8-87abde7b3a0b" (UID: "025696d8-212d-4b2b-bff8-87abde7b3a0b"). InnerVolumeSpecName "kube-api-access-ngrxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.225447 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "025696d8-212d-4b2b-bff8-87abde7b3a0b" (UID: "025696d8-212d-4b2b-bff8-87abde7b3a0b"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.243361 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "025696d8-212d-4b2b-bff8-87abde7b3a0b" (UID: "025696d8-212d-4b2b-bff8-87abde7b3a0b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.259065 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "025696d8-212d-4b2b-bff8-87abde7b3a0b" (UID: "025696d8-212d-4b2b-bff8-87abde7b3a0b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.265029 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-config-data" (OuterVolumeSpecName: "config-data") pod "025696d8-212d-4b2b-bff8-87abde7b3a0b" (UID: "025696d8-212d-4b2b-bff8-87abde7b3a0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.322957 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.323007 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngrxf\" (UniqueName: \"kubernetes.io/projected/025696d8-212d-4b2b-bff8-87abde7b3a0b-kube-api-access-ngrxf\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.323028 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.323069 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.323080 4747 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.323091 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/025696d8-212d-4b2b-bff8-87abde7b3a0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.345817 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.427862 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.844804 4747 generic.go:334] "Generic (PLEG): container finished" podID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" containerID="060f2615c80f98f57a981ebf97753de94f57ff94cb5ebcca426cc96fa4420570" exitCode=2 Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.844859 4747 generic.go:334] "Generic (PLEG): container finished" podID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" containerID="04ea6ec9dee7d98840f079728a4a45058bbd6871c1aa613688cb2fab919a26f0" exitCode=0 Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.844913 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab","Type":"ContainerDied","Data":"060f2615c80f98f57a981ebf97753de94f57ff94cb5ebcca426cc96fa4420570"} Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.844986 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab","Type":"ContainerDied","Data":"04ea6ec9dee7d98840f079728a4a45058bbd6871c1aa613688cb2fab919a26f0"} Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.847253 4747 generic.go:334] "Generic (PLEG): container finished" podID="025696d8-212d-4b2b-bff8-87abde7b3a0b" containerID="ac19bbda4e664a2896828a1315442fcde4e5ace0b7c0c5289e002c127d811368" exitCode=0 Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 
05:53:34.847281 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"025696d8-212d-4b2b-bff8-87abde7b3a0b","Type":"ContainerDied","Data":"ac19bbda4e664a2896828a1315442fcde4e5ace0b7c0c5289e002c127d811368"} Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.847303 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"025696d8-212d-4b2b-bff8-87abde7b3a0b","Type":"ContainerDied","Data":"856c16ed11c7910eaaedc1c5d99fbf7ec842a60f20cc53a89f41b6e37f1d335b"} Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.847326 4747 scope.go:117] "RemoveContainer" containerID="ac19bbda4e664a2896828a1315442fcde4e5ace0b7c0c5289e002c127d811368" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.847402 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.880260 4747 scope.go:117] "RemoveContainer" containerID="d991ef00573efcbdd8dd0d4b061264a9a748353813472f8e334fe651b644f113" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.888118 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.904295 4747 scope.go:117] "RemoveContainer" containerID="ac19bbda4e664a2896828a1315442fcde4e5ace0b7c0c5289e002c127d811368" Dec 15 05:53:34 crc kubenswrapper[4747]: E1215 05:53:34.904836 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac19bbda4e664a2896828a1315442fcde4e5ace0b7c0c5289e002c127d811368\": container with ID starting with ac19bbda4e664a2896828a1315442fcde4e5ace0b7c0c5289e002c127d811368 not found: ID does not exist" containerID="ac19bbda4e664a2896828a1315442fcde4e5ace0b7c0c5289e002c127d811368" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.904874 
4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac19bbda4e664a2896828a1315442fcde4e5ace0b7c0c5289e002c127d811368"} err="failed to get container status \"ac19bbda4e664a2896828a1315442fcde4e5ace0b7c0c5289e002c127d811368\": rpc error: code = NotFound desc = could not find container \"ac19bbda4e664a2896828a1315442fcde4e5ace0b7c0c5289e002c127d811368\": container with ID starting with ac19bbda4e664a2896828a1315442fcde4e5ace0b7c0c5289e002c127d811368 not found: ID does not exist" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.904901 4747 scope.go:117] "RemoveContainer" containerID="d991ef00573efcbdd8dd0d4b061264a9a748353813472f8e334fe651b644f113" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.905133 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 15 05:53:34 crc kubenswrapper[4747]: E1215 05:53:34.905390 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d991ef00573efcbdd8dd0d4b061264a9a748353813472f8e334fe651b644f113\": container with ID starting with d991ef00573efcbdd8dd0d4b061264a9a748353813472f8e334fe651b644f113 not found: ID does not exist" containerID="d991ef00573efcbdd8dd0d4b061264a9a748353813472f8e334fe651b644f113" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.905418 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d991ef00573efcbdd8dd0d4b061264a9a748353813472f8e334fe651b644f113"} err="failed to get container status \"d991ef00573efcbdd8dd0d4b061264a9a748353813472f8e334fe651b644f113\": rpc error: code = NotFound desc = could not find container \"d991ef00573efcbdd8dd0d4b061264a9a748353813472f8e334fe651b644f113\": container with ID starting with d991ef00573efcbdd8dd0d4b061264a9a748353813472f8e334fe651b644f113 not found: ID does not exist" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.914125 4747 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 15 05:53:34 crc kubenswrapper[4747]: E1215 05:53:34.914644 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025696d8-212d-4b2b-bff8-87abde7b3a0b" containerName="glance-log" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.914658 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="025696d8-212d-4b2b-bff8-87abde7b3a0b" containerName="glance-log" Dec 15 05:53:34 crc kubenswrapper[4747]: E1215 05:53:34.914713 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025696d8-212d-4b2b-bff8-87abde7b3a0b" containerName="glance-httpd" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.914720 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="025696d8-212d-4b2b-bff8-87abde7b3a0b" containerName="glance-httpd" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.914953 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="025696d8-212d-4b2b-bff8-87abde7b3a0b" containerName="glance-log" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.914968 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="025696d8-212d-4b2b-bff8-87abde7b3a0b" containerName="glance-httpd" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.916083 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.921416 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.921465 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 15 05:53:34 crc kubenswrapper[4747]: I1215 05:53:34.925422 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.041816 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32596d05-cc4c-41f3-87b0-a69ff49aba9d-scripts\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.041890 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32596d05-cc4c-41f3-87b0-a69ff49aba9d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.041962 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32596d05-cc4c-41f3-87b0-a69ff49aba9d-logs\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.041988 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/32596d05-cc4c-41f3-87b0-a69ff49aba9d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.042020 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbmzd\" (UniqueName: \"kubernetes.io/projected/32596d05-cc4c-41f3-87b0-a69ff49aba9d-kube-api-access-jbmzd\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.042213 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.042292 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32596d05-cc4c-41f3-87b0-a69ff49aba9d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.042467 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32596d05-cc4c-41f3-87b0-a69ff49aba9d-config-data\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.144252 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jbmzd\" (UniqueName: \"kubernetes.io/projected/32596d05-cc4c-41f3-87b0-a69ff49aba9d-kube-api-access-jbmzd\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.144305 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.144338 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32596d05-cc4c-41f3-87b0-a69ff49aba9d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.144383 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32596d05-cc4c-41f3-87b0-a69ff49aba9d-config-data\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.144462 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32596d05-cc4c-41f3-87b0-a69ff49aba9d-scripts\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.144494 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/32596d05-cc4c-41f3-87b0-a69ff49aba9d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.144545 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32596d05-cc4c-41f3-87b0-a69ff49aba9d-logs\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.144569 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32596d05-cc4c-41f3-87b0-a69ff49aba9d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.145298 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32596d05-cc4c-41f3-87b0-a69ff49aba9d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.145345 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32596d05-cc4c-41f3-87b0-a69ff49aba9d-logs\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.144645 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: 
\"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.150122 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32596d05-cc4c-41f3-87b0-a69ff49aba9d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.150605 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32596d05-cc4c-41f3-87b0-a69ff49aba9d-scripts\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.152618 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32596d05-cc4c-41f3-87b0-a69ff49aba9d-config-data\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.153651 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32596d05-cc4c-41f3-87b0-a69ff49aba9d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.168895 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbmzd\" (UniqueName: \"kubernetes.io/projected/32596d05-cc4c-41f3-87b0-a69ff49aba9d-kube-api-access-jbmzd\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " 
pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.169685 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"32596d05-cc4c-41f3-87b0-a69ff49aba9d\") " pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.236404 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.712608 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 15 05:53:35 crc kubenswrapper[4747]: W1215 05:53:35.721975 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32596d05_cc4c_41f3_87b0_a69ff49aba9d.slice/crio-ba545335b169d5922c5e974ced039bb830fe4441a4c4837e484be90e6f779370 WatchSource:0}: Error finding container ba545335b169d5922c5e974ced039bb830fe4441a4c4837e484be90e6f779370: Status 404 returned error can't find the container with id ba545335b169d5922c5e974ced039bb830fe4441a4c4837e484be90e6f779370 Dec 15 05:53:35 crc kubenswrapper[4747]: I1215 05:53:35.858658 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32596d05-cc4c-41f3-87b0-a69ff49aba9d","Type":"ContainerStarted","Data":"ba545335b169d5922c5e974ced039bb830fe4441a4c4837e484be90e6f779370"} Dec 15 05:53:36 crc kubenswrapper[4747]: I1215 05:53:36.642841 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="025696d8-212d-4b2b-bff8-87abde7b3a0b" path="/var/lib/kubelet/pods/025696d8-212d-4b2b-bff8-87abde7b3a0b/volumes" Dec 15 05:53:36 crc kubenswrapper[4747]: I1215 05:53:36.871334 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"32596d05-cc4c-41f3-87b0-a69ff49aba9d","Type":"ContainerStarted","Data":"8e1d662fda3aeb6711011088291bca9cbaea3258432a608ab9d425977994d745"} Dec 15 05:53:36 crc kubenswrapper[4747]: I1215 05:53:36.871585 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32596d05-cc4c-41f3-87b0-a69ff49aba9d","Type":"ContainerStarted","Data":"7a77846b4a57b472216097a06b8c31f683508a0ee1354095159830cc690e647a"} Dec 15 05:53:36 crc kubenswrapper[4747]: I1215 05:53:36.876139 4747 generic.go:334] "Generic (PLEG): container finished" podID="6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8" containerID="b8a96438134d446f04ed9de3b208f822638cde09ceca85794ae49b14c867a63e" exitCode=0 Dec 15 05:53:36 crc kubenswrapper[4747]: I1215 05:53:36.876185 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8","Type":"ContainerDied","Data":"b8a96438134d446f04ed9de3b208f822638cde09ceca85794ae49b14c867a63e"} Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.046285 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.087717 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.087696273 podStartE2EDuration="3.087696273s" podCreationTimestamp="2025-12-15 05:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:53:36.895267196 +0000 UTC m=+980.591779113" watchObservedRunningTime="2025-12-15 05:53:37.087696273 +0000 UTC m=+980.784208190" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.198843 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-logs\") pod \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.199076 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.199106 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp2n9\" (UniqueName: \"kubernetes.io/projected/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-kube-api-access-bp2n9\") pod \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.199157 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-config-data\") pod \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\" (UID: 
\"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.199196 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-combined-ca-bundle\") pod \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.199258 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-internal-tls-certs\") pod \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.199305 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-scripts\") pod \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.199322 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-httpd-run\") pod \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\" (UID: \"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8\") " Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.199507 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-logs" (OuterVolumeSpecName: "logs") pod "6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8" (UID: "6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.200366 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8" (UID: "6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.206368 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-scripts" (OuterVolumeSpecName: "scripts") pod "6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8" (UID: "6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.207058 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8" (UID: "6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.221561 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-kube-api-access-bp2n9" (OuterVolumeSpecName: "kube-api-access-bp2n9") pod "6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8" (UID: "6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8"). InnerVolumeSpecName "kube-api-access-bp2n9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.225343 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8" (UID: "6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.245796 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-config-data" (OuterVolumeSpecName: "config-data") pod "6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8" (UID: "6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.248694 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8" (UID: "6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.302302 4747 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.302333 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.302344 4747 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.302355 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-logs\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.302379 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.302389 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp2n9\" (UniqueName: \"kubernetes.io/projected/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-kube-api-access-bp2n9\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.302400 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.302408 4747 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.318508 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.404861 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.884967 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8","Type":"ContainerDied","Data":"d6c040abb4a985552a3d46a35c9bcff5c4c3c8fe2fed47690586d0a179ff3a7e"} Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.885841 4747 scope.go:117] "RemoveContainer" containerID="b8a96438134d446f04ed9de3b208f822638cde09ceca85794ae49b14c867a63e" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.886082 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.890088 4747 generic.go:334] "Generic (PLEG): container finished" podID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" containerID="092b17ebb6940478f5a6a209ea30bdb7b690dfcbdd47c24a52bed0febada9630" exitCode=0 Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.890166 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab","Type":"ContainerDied","Data":"092b17ebb6940478f5a6a209ea30bdb7b690dfcbdd47c24a52bed0febada9630"} Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.916492 4747 scope.go:117] "RemoveContainer" containerID="dc0b73a864df0515127d885874d7f1f5e1e02b0cecbde221190011f85b394b7d" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.920676 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.925890 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.943454 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 15 05:53:37 crc kubenswrapper[4747]: E1215 05:53:37.943887 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8" containerName="glance-httpd" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.943911 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8" containerName="glance-httpd" Dec 15 05:53:37 crc kubenswrapper[4747]: E1215 05:53:37.943990 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8" containerName="glance-log" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.943999 4747 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8" containerName="glance-log" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.944257 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8" containerName="glance-httpd" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.944289 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8" containerName="glance-log" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.945379 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.947286 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.947319 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 15 05:53:37 crc kubenswrapper[4747]: I1215 05:53:37.958971 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.120282 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89218c2b-2e98-43cc-a4b4-3e741773bfb8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.120418 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r665p\" (UniqueName: \"kubernetes.io/projected/89218c2b-2e98-43cc-a4b4-3e741773bfb8-kube-api-access-r665p\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 
15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.120530 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89218c2b-2e98-43cc-a4b4-3e741773bfb8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.120583 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89218c2b-2e98-43cc-a4b4-3e741773bfb8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.120619 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89218c2b-2e98-43cc-a4b4-3e741773bfb8-logs\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.120664 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89218c2b-2e98-43cc-a4b4-3e741773bfb8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.120728 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " 
pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.120751 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89218c2b-2e98-43cc-a4b4-3e741773bfb8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.222232 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89218c2b-2e98-43cc-a4b4-3e741773bfb8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.222496 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89218c2b-2e98-43cc-a4b4-3e741773bfb8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.222533 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89218c2b-2e98-43cc-a4b4-3e741773bfb8-logs\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.222552 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89218c2b-2e98-43cc-a4b4-3e741773bfb8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 
05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.223025 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89218c2b-2e98-43cc-a4b4-3e741773bfb8-logs\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.223088 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.223362 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.223687 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89218c2b-2e98-43cc-a4b4-3e741773bfb8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.223123 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89218c2b-2e98-43cc-a4b4-3e741773bfb8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.225080 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89218c2b-2e98-43cc-a4b4-3e741773bfb8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.225147 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r665p\" (UniqueName: \"kubernetes.io/projected/89218c2b-2e98-43cc-a4b4-3e741773bfb8-kube-api-access-r665p\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.227806 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89218c2b-2e98-43cc-a4b4-3e741773bfb8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.228107 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89218c2b-2e98-43cc-a4b4-3e741773bfb8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.228650 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89218c2b-2e98-43cc-a4b4-3e741773bfb8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.237130 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/89218c2b-2e98-43cc-a4b4-3e741773bfb8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.240213 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r665p\" (UniqueName: \"kubernetes.io/projected/89218c2b-2e98-43cc-a4b4-3e741773bfb8-kube-api-access-r665p\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.247296 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"89218c2b-2e98-43cc-a4b4-3e741773bfb8\") " pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.258863 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.642887 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8" path="/var/lib/kubelet/pods/6b63c9a1-c4cd-4e20-a5fc-cabe6bb190e8/volumes" Dec 15 05:53:38 crc kubenswrapper[4747]: W1215 05:53:38.771081 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89218c2b_2e98_43cc_a4b4_3e741773bfb8.slice/crio-515a4a1e7cb5b9d0fc5f4f4f4738926e8027135507c5f25c3ecef81574bd66d1 WatchSource:0}: Error finding container 515a4a1e7cb5b9d0fc5f4f4f4738926e8027135507c5f25c3ecef81574bd66d1: Status 404 returned error can't find the container with id 515a4a1e7cb5b9d0fc5f4f4f4738926e8027135507c5f25c3ecef81574bd66d1 Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.774439 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 15 05:53:38 crc kubenswrapper[4747]: I1215 05:53:38.923451 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89218c2b-2e98-43cc-a4b4-3e741773bfb8","Type":"ContainerStarted","Data":"515a4a1e7cb5b9d0fc5f4f4f4738926e8027135507c5f25c3ecef81574bd66d1"} Dec 15 05:53:39 crc kubenswrapper[4747]: I1215 05:53:39.933872 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89218c2b-2e98-43cc-a4b4-3e741773bfb8","Type":"ContainerStarted","Data":"54df87bc289fd26714eef82ae41ef05d56944751173af943d75e76e811afa349"} Dec 15 05:53:39 crc kubenswrapper[4747]: I1215 05:53:39.934658 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89218c2b-2e98-43cc-a4b4-3e741773bfb8","Type":"ContainerStarted","Data":"ae5730fd3fe269d9e4d29d3a8893484d81d70845c5d413918513e23e6a830875"} Dec 15 05:53:39 crc 
kubenswrapper[4747]: I1215 05:53:39.958423 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.958407545 podStartE2EDuration="2.958407545s" podCreationTimestamp="2025-12-15 05:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:53:39.952439982 +0000 UTC m=+983.648951900" watchObservedRunningTime="2025-12-15 05:53:39.958407545 +0000 UTC m=+983.654919462" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.238778 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.239538 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.279904 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.291531 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.337305 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-5blmr"] Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.338399 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-5blmr" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.347536 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-5blmr"] Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.428433 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-9pfvt"] Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.429430 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9pfvt" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.440602 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9pfvt"] Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.445488 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a081-account-create-update-dwntx"] Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.446626 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a081-account-create-update-dwntx" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.449234 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.478370 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a081-account-create-update-dwntx"] Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.496150 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcr9n\" (UniqueName: \"kubernetes.io/projected/bc47afcf-b663-41be-86d7-a77108e5020c-kube-api-access-qcr9n\") pod \"nova-api-db-create-5blmr\" (UID: \"bc47afcf-b663-41be-86d7-a77108e5020c\") " pod="openstack/nova-api-db-create-5blmr" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.496219 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc47afcf-b663-41be-86d7-a77108e5020c-operator-scripts\") pod \"nova-api-db-create-5blmr\" (UID: \"bc47afcf-b663-41be-86d7-a77108e5020c\") " pod="openstack/nova-api-db-create-5blmr" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.546437 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-s8tch"] Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.549849 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-s8tch" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.571318 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-s8tch"] Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.598304 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tj5v\" (UniqueName: \"kubernetes.io/projected/66655803-661a-4934-8483-30529581438f-kube-api-access-9tj5v\") pod \"nova-api-a081-account-create-update-dwntx\" (UID: \"66655803-661a-4934-8483-30529581438f\") " pod="openstack/nova-api-a081-account-create-update-dwntx" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.598634 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66655803-661a-4934-8483-30529581438f-operator-scripts\") pod \"nova-api-a081-account-create-update-dwntx\" (UID: \"66655803-661a-4934-8483-30529581438f\") " pod="openstack/nova-api-a081-account-create-update-dwntx" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.599059 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7d57\" (UniqueName: \"kubernetes.io/projected/3354a33b-f658-4c99-a32c-015e29ab16e4-kube-api-access-w7d57\") pod \"nova-cell0-db-create-9pfvt\" (UID: \"3354a33b-f658-4c99-a32c-015e29ab16e4\") " pod="openstack/nova-cell0-db-create-9pfvt" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.599105 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3354a33b-f658-4c99-a32c-015e29ab16e4-operator-scripts\") pod \"nova-cell0-db-create-9pfvt\" (UID: \"3354a33b-f658-4c99-a32c-015e29ab16e4\") " pod="openstack/nova-cell0-db-create-9pfvt" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 
05:53:45.599223 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcr9n\" (UniqueName: \"kubernetes.io/projected/bc47afcf-b663-41be-86d7-a77108e5020c-kube-api-access-qcr9n\") pod \"nova-api-db-create-5blmr\" (UID: \"bc47afcf-b663-41be-86d7-a77108e5020c\") " pod="openstack/nova-api-db-create-5blmr" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.599270 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc47afcf-b663-41be-86d7-a77108e5020c-operator-scripts\") pod \"nova-api-db-create-5blmr\" (UID: \"bc47afcf-b663-41be-86d7-a77108e5020c\") " pod="openstack/nova-api-db-create-5blmr" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.599993 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc47afcf-b663-41be-86d7-a77108e5020c-operator-scripts\") pod \"nova-api-db-create-5blmr\" (UID: \"bc47afcf-b663-41be-86d7-a77108e5020c\") " pod="openstack/nova-api-db-create-5blmr" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.622366 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcr9n\" (UniqueName: \"kubernetes.io/projected/bc47afcf-b663-41be-86d7-a77108e5020c-kube-api-access-qcr9n\") pod \"nova-api-db-create-5blmr\" (UID: \"bc47afcf-b663-41be-86d7-a77108e5020c\") " pod="openstack/nova-api-db-create-5blmr" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.641309 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cacf-account-create-update-vg5lk"] Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.642702 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cacf-account-create-update-vg5lk" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.644388 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.647812 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cacf-account-create-update-vg5lk"] Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.654156 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5blmr" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.703129 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66655803-661a-4934-8483-30529581438f-operator-scripts\") pod \"nova-api-a081-account-create-update-dwntx\" (UID: \"66655803-661a-4934-8483-30529581438f\") " pod="openstack/nova-api-a081-account-create-update-dwntx" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.703415 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7d57\" (UniqueName: \"kubernetes.io/projected/3354a33b-f658-4c99-a32c-015e29ab16e4-kube-api-access-w7d57\") pod \"nova-cell0-db-create-9pfvt\" (UID: \"3354a33b-f658-4c99-a32c-015e29ab16e4\") " pod="openstack/nova-cell0-db-create-9pfvt" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.703444 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3354a33b-f658-4c99-a32c-015e29ab16e4-operator-scripts\") pod \"nova-cell0-db-create-9pfvt\" (UID: \"3354a33b-f658-4c99-a32c-015e29ab16e4\") " pod="openstack/nova-cell0-db-create-9pfvt" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.703490 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mn9qx\" (UniqueName: \"kubernetes.io/projected/3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8-kube-api-access-mn9qx\") pod \"nova-cell1-db-create-s8tch\" (UID: \"3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8\") " pod="openstack/nova-cell1-db-create-s8tch" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.703594 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tj5v\" (UniqueName: \"kubernetes.io/projected/66655803-661a-4934-8483-30529581438f-kube-api-access-9tj5v\") pod \"nova-api-a081-account-create-update-dwntx\" (UID: \"66655803-661a-4934-8483-30529581438f\") " pod="openstack/nova-api-a081-account-create-update-dwntx" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.703636 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8-operator-scripts\") pod \"nova-cell1-db-create-s8tch\" (UID: \"3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8\") " pod="openstack/nova-cell1-db-create-s8tch" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.704105 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66655803-661a-4934-8483-30529581438f-operator-scripts\") pod \"nova-api-a081-account-create-update-dwntx\" (UID: \"66655803-661a-4934-8483-30529581438f\") " pod="openstack/nova-api-a081-account-create-update-dwntx" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.704176 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3354a33b-f658-4c99-a32c-015e29ab16e4-operator-scripts\") pod \"nova-cell0-db-create-9pfvt\" (UID: \"3354a33b-f658-4c99-a32c-015e29ab16e4\") " pod="openstack/nova-cell0-db-create-9pfvt" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.720550 4747 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9tj5v\" (UniqueName: \"kubernetes.io/projected/66655803-661a-4934-8483-30529581438f-kube-api-access-9tj5v\") pod \"nova-api-a081-account-create-update-dwntx\" (UID: \"66655803-661a-4934-8483-30529581438f\") " pod="openstack/nova-api-a081-account-create-update-dwntx" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.722709 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7d57\" (UniqueName: \"kubernetes.io/projected/3354a33b-f658-4c99-a32c-015e29ab16e4-kube-api-access-w7d57\") pod \"nova-cell0-db-create-9pfvt\" (UID: \"3354a33b-f658-4c99-a32c-015e29ab16e4\") " pod="openstack/nova-cell0-db-create-9pfvt" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.744293 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9pfvt" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.759190 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a081-account-create-update-dwntx" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.805732 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8da8364f-08ef-4037-96ef-560876f54025-operator-scripts\") pod \"nova-cell0-cacf-account-create-update-vg5lk\" (UID: \"8da8364f-08ef-4037-96ef-560876f54025\") " pod="openstack/nova-cell0-cacf-account-create-update-vg5lk" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.805820 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8-operator-scripts\") pod \"nova-cell1-db-create-s8tch\" (UID: \"3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8\") " pod="openstack/nova-cell1-db-create-s8tch" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.805965 4747 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn9qx\" (UniqueName: \"kubernetes.io/projected/3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8-kube-api-access-mn9qx\") pod \"nova-cell1-db-create-s8tch\" (UID: \"3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8\") " pod="openstack/nova-cell1-db-create-s8tch" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.806082 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5929h\" (UniqueName: \"kubernetes.io/projected/8da8364f-08ef-4037-96ef-560876f54025-kube-api-access-5929h\") pod \"nova-cell0-cacf-account-create-update-vg5lk\" (UID: \"8da8364f-08ef-4037-96ef-560876f54025\") " pod="openstack/nova-cell0-cacf-account-create-update-vg5lk" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.806755 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8-operator-scripts\") pod \"nova-cell1-db-create-s8tch\" (UID: \"3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8\") " pod="openstack/nova-cell1-db-create-s8tch" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.837561 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn9qx\" (UniqueName: \"kubernetes.io/projected/3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8-kube-api-access-mn9qx\") pod \"nova-cell1-db-create-s8tch\" (UID: \"3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8\") " pod="openstack/nova-cell1-db-create-s8tch" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.851409 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-8e39-account-create-update-db98v"] Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.852826 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-8e39-account-create-update-db98v" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.854967 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.865762 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-s8tch" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.871866 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8e39-account-create-update-db98v"] Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.908739 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5929h\" (UniqueName: \"kubernetes.io/projected/8da8364f-08ef-4037-96ef-560876f54025-kube-api-access-5929h\") pod \"nova-cell0-cacf-account-create-update-vg5lk\" (UID: \"8da8364f-08ef-4037-96ef-560876f54025\") " pod="openstack/nova-cell0-cacf-account-create-update-vg5lk" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.908901 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8da8364f-08ef-4037-96ef-560876f54025-operator-scripts\") pod \"nova-cell0-cacf-account-create-update-vg5lk\" (UID: \"8da8364f-08ef-4037-96ef-560876f54025\") " pod="openstack/nova-cell0-cacf-account-create-update-vg5lk" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.909918 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8da8364f-08ef-4037-96ef-560876f54025-operator-scripts\") pod \"nova-cell0-cacf-account-create-update-vg5lk\" (UID: \"8da8364f-08ef-4037-96ef-560876f54025\") " pod="openstack/nova-cell0-cacf-account-create-update-vg5lk" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.927888 4747 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5929h\" (UniqueName: \"kubernetes.io/projected/8da8364f-08ef-4037-96ef-560876f54025-kube-api-access-5929h\") pod \"nova-cell0-cacf-account-create-update-vg5lk\" (UID: \"8da8364f-08ef-4037-96ef-560876f54025\") " pod="openstack/nova-cell0-cacf-account-create-update-vg5lk" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.995965 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 15 05:53:45 crc kubenswrapper[4747]: I1215 05:53:45.996000 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 15 05:53:46 crc kubenswrapper[4747]: I1215 05:53:46.010784 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrpq2\" (UniqueName: \"kubernetes.io/projected/917c48fe-e9b6-40da-8a57-a107fb5beb34-kube-api-access-jrpq2\") pod \"nova-cell1-8e39-account-create-update-db98v\" (UID: \"917c48fe-e9b6-40da-8a57-a107fb5beb34\") " pod="openstack/nova-cell1-8e39-account-create-update-db98v" Dec 15 05:53:46 crc kubenswrapper[4747]: I1215 05:53:46.010825 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/917c48fe-e9b6-40da-8a57-a107fb5beb34-operator-scripts\") pod \"nova-cell1-8e39-account-create-update-db98v\" (UID: \"917c48fe-e9b6-40da-8a57-a107fb5beb34\") " pod="openstack/nova-cell1-8e39-account-create-update-db98v" Dec 15 05:53:46 crc kubenswrapper[4747]: I1215 05:53:46.065000 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cacf-account-create-update-vg5lk" Dec 15 05:53:46 crc kubenswrapper[4747]: I1215 05:53:46.115751 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrpq2\" (UniqueName: \"kubernetes.io/projected/917c48fe-e9b6-40da-8a57-a107fb5beb34-kube-api-access-jrpq2\") pod \"nova-cell1-8e39-account-create-update-db98v\" (UID: \"917c48fe-e9b6-40da-8a57-a107fb5beb34\") " pod="openstack/nova-cell1-8e39-account-create-update-db98v" Dec 15 05:53:46 crc kubenswrapper[4747]: I1215 05:53:46.115802 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/917c48fe-e9b6-40da-8a57-a107fb5beb34-operator-scripts\") pod \"nova-cell1-8e39-account-create-update-db98v\" (UID: \"917c48fe-e9b6-40da-8a57-a107fb5beb34\") " pod="openstack/nova-cell1-8e39-account-create-update-db98v" Dec 15 05:53:46 crc kubenswrapper[4747]: I1215 05:53:46.116185 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-5blmr"] Dec 15 05:53:46 crc kubenswrapper[4747]: I1215 05:53:46.116950 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/917c48fe-e9b6-40da-8a57-a107fb5beb34-operator-scripts\") pod \"nova-cell1-8e39-account-create-update-db98v\" (UID: \"917c48fe-e9b6-40da-8a57-a107fb5beb34\") " pod="openstack/nova-cell1-8e39-account-create-update-db98v" Dec 15 05:53:46 crc kubenswrapper[4747]: I1215 05:53:46.139698 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrpq2\" (UniqueName: \"kubernetes.io/projected/917c48fe-e9b6-40da-8a57-a107fb5beb34-kube-api-access-jrpq2\") pod \"nova-cell1-8e39-account-create-update-db98v\" (UID: \"917c48fe-e9b6-40da-8a57-a107fb5beb34\") " pod="openstack/nova-cell1-8e39-account-create-update-db98v" Dec 15 05:53:46 crc kubenswrapper[4747]: I1215 
05:53:46.178318 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8e39-account-create-update-db98v" Dec 15 05:53:46 crc kubenswrapper[4747]: I1215 05:53:46.253079 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a081-account-create-update-dwntx"] Dec 15 05:53:46 crc kubenswrapper[4747]: I1215 05:53:46.259660 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9pfvt"] Dec 15 05:53:46 crc kubenswrapper[4747]: I1215 05:53:46.443092 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-s8tch"] Dec 15 05:53:46 crc kubenswrapper[4747]: I1215 05:53:46.560020 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cacf-account-create-update-vg5lk"] Dec 15 05:53:46 crc kubenswrapper[4747]: I1215 05:53:46.881548 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8e39-account-create-update-db98v"] Dec 15 05:53:46 crc kubenswrapper[4747]: W1215 05:53:46.915178 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod917c48fe_e9b6_40da_8a57_a107fb5beb34.slice/crio-89ed43364e014b3f140c51ee3c7977292baa05f8377ac7506aa404d0f8828d3c WatchSource:0}: Error finding container 89ed43364e014b3f140c51ee3c7977292baa05f8377ac7506aa404d0f8828d3c: Status 404 returned error can't find the container with id 89ed43364e014b3f140c51ee3c7977292baa05f8377ac7506aa404d0f8828d3c Dec 15 05:53:47 crc kubenswrapper[4747]: I1215 05:53:47.005821 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8e39-account-create-update-db98v" event={"ID":"917c48fe-e9b6-40da-8a57-a107fb5beb34","Type":"ContainerStarted","Data":"89ed43364e014b3f140c51ee3c7977292baa05f8377ac7506aa404d0f8828d3c"} Dec 15 05:53:47 crc kubenswrapper[4747]: I1215 05:53:47.007894 4747 generic.go:334] "Generic (PLEG): container 
finished" podID="66655803-661a-4934-8483-30529581438f" containerID="178d5b50270826bfe9b0439a5c6c151f3ebf51b85256c9a442ab371b84c67ab3" exitCode=0 Dec 15 05:53:47 crc kubenswrapper[4747]: I1215 05:53:47.008029 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a081-account-create-update-dwntx" event={"ID":"66655803-661a-4934-8483-30529581438f","Type":"ContainerDied","Data":"178d5b50270826bfe9b0439a5c6c151f3ebf51b85256c9a442ab371b84c67ab3"} Dec 15 05:53:47 crc kubenswrapper[4747]: I1215 05:53:47.008067 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a081-account-create-update-dwntx" event={"ID":"66655803-661a-4934-8483-30529581438f","Type":"ContainerStarted","Data":"20172a720020a871a837b40879ad1f91f69d98693dad55e51263e56e048aac42"} Dec 15 05:53:47 crc kubenswrapper[4747]: I1215 05:53:47.010948 4747 generic.go:334] "Generic (PLEG): container finished" podID="3354a33b-f658-4c99-a32c-015e29ab16e4" containerID="e810a50b16e0881729df1f9e177fda73e8d0d5f4dbeefab0fdeccaad1023164d" exitCode=0 Dec 15 05:53:47 crc kubenswrapper[4747]: I1215 05:53:47.011065 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9pfvt" event={"ID":"3354a33b-f658-4c99-a32c-015e29ab16e4","Type":"ContainerDied","Data":"e810a50b16e0881729df1f9e177fda73e8d0d5f4dbeefab0fdeccaad1023164d"} Dec 15 05:53:47 crc kubenswrapper[4747]: I1215 05:53:47.011138 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9pfvt" event={"ID":"3354a33b-f658-4c99-a32c-015e29ab16e4","Type":"ContainerStarted","Data":"9353cd6cc31c4e27cfdb305a9893c1dff24eb08a5dce387cfece02c4d49de125"} Dec 15 05:53:47 crc kubenswrapper[4747]: I1215 05:53:47.016113 4747 generic.go:334] "Generic (PLEG): container finished" podID="bc47afcf-b663-41be-86d7-a77108e5020c" containerID="96331b574620b4ea0a307673cddaea687954765986a0af72ca26326fe203e840" exitCode=0 Dec 15 05:53:47 crc kubenswrapper[4747]: I1215 05:53:47.016153 
4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5blmr" event={"ID":"bc47afcf-b663-41be-86d7-a77108e5020c","Type":"ContainerDied","Data":"96331b574620b4ea0a307673cddaea687954765986a0af72ca26326fe203e840"} Dec 15 05:53:47 crc kubenswrapper[4747]: I1215 05:53:47.016168 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5blmr" event={"ID":"bc47afcf-b663-41be-86d7-a77108e5020c","Type":"ContainerStarted","Data":"5dc1c960e9a37656fb3e994db84f01b013b7a2533a118707ce8513ac1a5cd813"} Dec 15 05:53:47 crc kubenswrapper[4747]: I1215 05:53:47.019578 4747 generic.go:334] "Generic (PLEG): container finished" podID="3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8" containerID="e6032e14ced0acb946136bfffaa2d72e396d3f688f6660a947f811def5d1a680" exitCode=0 Dec 15 05:53:47 crc kubenswrapper[4747]: I1215 05:53:47.019614 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s8tch" event={"ID":"3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8","Type":"ContainerDied","Data":"e6032e14ced0acb946136bfffaa2d72e396d3f688f6660a947f811def5d1a680"} Dec 15 05:53:47 crc kubenswrapper[4747]: I1215 05:53:47.019632 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s8tch" event={"ID":"3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8","Type":"ContainerStarted","Data":"ed8930a129a43648e154cfe8cdef41c1ce835cbdab571d4362d6127cdc3d1ff9"} Dec 15 05:53:47 crc kubenswrapper[4747]: I1215 05:53:47.026781 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cacf-account-create-update-vg5lk" event={"ID":"8da8364f-08ef-4037-96ef-560876f54025","Type":"ContainerStarted","Data":"580f497568c8c4f93f713faa404a57739a9c47b7e782003d2c969f608c02b90e"} Dec 15 05:53:47 crc kubenswrapper[4747]: I1215 05:53:47.026823 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cacf-account-create-update-vg5lk" 
event={"ID":"8da8364f-08ef-4037-96ef-560876f54025","Type":"ContainerStarted","Data":"d345f646dd674eb9a37ed366473795f3381da6e1ad9e0b5ae1b0bbe812447434"} Dec 15 05:53:47 crc kubenswrapper[4747]: I1215 05:53:47.077017 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cacf-account-create-update-vg5lk" podStartSLOduration=2.076999856 podStartE2EDuration="2.076999856s" podCreationTimestamp="2025-12-15 05:53:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:53:47.067858666 +0000 UTC m=+990.764370573" watchObservedRunningTime="2025-12-15 05:53:47.076999856 +0000 UTC m=+990.773511773" Dec 15 05:53:47 crc kubenswrapper[4747]: I1215 05:53:47.743490 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 15 05:53:47 crc kubenswrapper[4747]: I1215 05:53:47.870095 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.034722 4747 generic.go:334] "Generic (PLEG): container finished" podID="8da8364f-08ef-4037-96ef-560876f54025" containerID="580f497568c8c4f93f713faa404a57739a9c47b7e782003d2c969f608c02b90e" exitCode=0 Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.034837 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cacf-account-create-update-vg5lk" event={"ID":"8da8364f-08ef-4037-96ef-560876f54025","Type":"ContainerDied","Data":"580f497568c8c4f93f713faa404a57739a9c47b7e782003d2c969f608c02b90e"} Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.036470 4747 generic.go:334] "Generic (PLEG): container finished" podID="917c48fe-e9b6-40da-8a57-a107fb5beb34" containerID="8a2204e885c93c3a32600c667ccdee090d373222a7bbbd8f9129a9d3689958d7" exitCode=0 Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 
05:53:48.036511 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8e39-account-create-update-db98v" event={"ID":"917c48fe-e9b6-40da-8a57-a107fb5beb34","Type":"ContainerDied","Data":"8a2204e885c93c3a32600c667ccdee090d373222a7bbbd8f9129a9d3689958d7"} Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.259371 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.259418 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.296047 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.302298 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.405384 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-5blmr" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.474053 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcr9n\" (UniqueName: \"kubernetes.io/projected/bc47afcf-b663-41be-86d7-a77108e5020c-kube-api-access-qcr9n\") pod \"bc47afcf-b663-41be-86d7-a77108e5020c\" (UID: \"bc47afcf-b663-41be-86d7-a77108e5020c\") " Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.474240 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc47afcf-b663-41be-86d7-a77108e5020c-operator-scripts\") pod \"bc47afcf-b663-41be-86d7-a77108e5020c\" (UID: \"bc47afcf-b663-41be-86d7-a77108e5020c\") " Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.475991 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc47afcf-b663-41be-86d7-a77108e5020c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc47afcf-b663-41be-86d7-a77108e5020c" (UID: "bc47afcf-b663-41be-86d7-a77108e5020c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.480991 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc47afcf-b663-41be-86d7-a77108e5020c-kube-api-access-qcr9n" (OuterVolumeSpecName: "kube-api-access-qcr9n") pod "bc47afcf-b663-41be-86d7-a77108e5020c" (UID: "bc47afcf-b663-41be-86d7-a77108e5020c"). InnerVolumeSpecName "kube-api-access-qcr9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.558309 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-9pfvt" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.579969 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcr9n\" (UniqueName: \"kubernetes.io/projected/bc47afcf-b663-41be-86d7-a77108e5020c-kube-api-access-qcr9n\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.579998 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc47afcf-b663-41be-86d7-a77108e5020c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.585719 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a081-account-create-update-dwntx" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.588173 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-s8tch" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.681163 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tj5v\" (UniqueName: \"kubernetes.io/projected/66655803-661a-4934-8483-30529581438f-kube-api-access-9tj5v\") pod \"66655803-661a-4934-8483-30529581438f\" (UID: \"66655803-661a-4934-8483-30529581438f\") " Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.681341 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66655803-661a-4934-8483-30529581438f-operator-scripts\") pod \"66655803-661a-4934-8483-30529581438f\" (UID: \"66655803-661a-4934-8483-30529581438f\") " Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.681467 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8-operator-scripts\") pod \"3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8\" (UID: \"3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8\") " Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.681529 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn9qx\" (UniqueName: \"kubernetes.io/projected/3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8-kube-api-access-mn9qx\") pod \"3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8\" (UID: \"3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8\") " Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.681606 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3354a33b-f658-4c99-a32c-015e29ab16e4-operator-scripts\") pod \"3354a33b-f658-4c99-a32c-015e29ab16e4\" (UID: \"3354a33b-f658-4c99-a32c-015e29ab16e4\") " Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.681703 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7d57\" (UniqueName: \"kubernetes.io/projected/3354a33b-f658-4c99-a32c-015e29ab16e4-kube-api-access-w7d57\") pod \"3354a33b-f658-4c99-a32c-015e29ab16e4\" (UID: \"3354a33b-f658-4c99-a32c-015e29ab16e4\") " Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.682034 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66655803-661a-4934-8483-30529581438f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66655803-661a-4934-8483-30529581438f" (UID: "66655803-661a-4934-8483-30529581438f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.682100 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3354a33b-f658-4c99-a32c-015e29ab16e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3354a33b-f658-4c99-a32c-015e29ab16e4" (UID: "3354a33b-f658-4c99-a32c-015e29ab16e4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.682382 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8" (UID: "3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.682497 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66655803-661a-4934-8483-30529581438f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.682521 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3354a33b-f658-4c99-a32c-015e29ab16e4-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.685179 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3354a33b-f658-4c99-a32c-015e29ab16e4-kube-api-access-w7d57" (OuterVolumeSpecName: "kube-api-access-w7d57") pod "3354a33b-f658-4c99-a32c-015e29ab16e4" (UID: "3354a33b-f658-4c99-a32c-015e29ab16e4"). InnerVolumeSpecName "kube-api-access-w7d57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.685898 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66655803-661a-4934-8483-30529581438f-kube-api-access-9tj5v" (OuterVolumeSpecName: "kube-api-access-9tj5v") pod "66655803-661a-4934-8483-30529581438f" (UID: "66655803-661a-4934-8483-30529581438f"). InnerVolumeSpecName "kube-api-access-9tj5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.685970 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8-kube-api-access-mn9qx" (OuterVolumeSpecName: "kube-api-access-mn9qx") pod "3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8" (UID: "3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8"). InnerVolumeSpecName "kube-api-access-mn9qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.785733 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.785773 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn9qx\" (UniqueName: \"kubernetes.io/projected/3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8-kube-api-access-mn9qx\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.785790 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7d57\" (UniqueName: \"kubernetes.io/projected/3354a33b-f658-4c99-a32c-015e29ab16e4-kube-api-access-w7d57\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:48 crc kubenswrapper[4747]: I1215 05:53:48.785803 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tj5v\" (UniqueName: 
\"kubernetes.io/projected/66655803-661a-4934-8483-30529581438f-kube-api-access-9tj5v\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.047194 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s8tch" event={"ID":"3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8","Type":"ContainerDied","Data":"ed8930a129a43648e154cfe8cdef41c1ce835cbdab571d4362d6127cdc3d1ff9"} Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.047282 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed8930a129a43648e154cfe8cdef41c1ce835cbdab571d4362d6127cdc3d1ff9" Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.047239 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-s8tch" Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.048831 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a081-account-create-update-dwntx" event={"ID":"66655803-661a-4934-8483-30529581438f","Type":"ContainerDied","Data":"20172a720020a871a837b40879ad1f91f69d98693dad55e51263e56e048aac42"} Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.048860 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20172a720020a871a837b40879ad1f91f69d98693dad55e51263e56e048aac42" Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.048899 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a081-account-create-update-dwntx" Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.055751 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9pfvt" event={"ID":"3354a33b-f658-4c99-a32c-015e29ab16e4","Type":"ContainerDied","Data":"9353cd6cc31c4e27cfdb305a9893c1dff24eb08a5dce387cfece02c4d49de125"} Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.055788 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9353cd6cc31c4e27cfdb305a9893c1dff24eb08a5dce387cfece02c4d49de125" Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.055826 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9pfvt" Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.063615 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5blmr" Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.064310 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5blmr" event={"ID":"bc47afcf-b663-41be-86d7-a77108e5020c","Type":"ContainerDied","Data":"5dc1c960e9a37656fb3e994db84f01b013b7a2533a118707ce8513ac1a5cd813"} Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.064347 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dc1c960e9a37656fb3e994db84f01b013b7a2533a118707ce8513ac1a5cd813" Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.064503 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.065018 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.456636 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cacf-account-create-update-vg5lk" Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.461084 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8e39-account-create-update-db98v" Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.609347 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrpq2\" (UniqueName: \"kubernetes.io/projected/917c48fe-e9b6-40da-8a57-a107fb5beb34-kube-api-access-jrpq2\") pod \"917c48fe-e9b6-40da-8a57-a107fb5beb34\" (UID: \"917c48fe-e9b6-40da-8a57-a107fb5beb34\") " Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.609456 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8da8364f-08ef-4037-96ef-560876f54025-operator-scripts\") pod \"8da8364f-08ef-4037-96ef-560876f54025\" (UID: \"8da8364f-08ef-4037-96ef-560876f54025\") " Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.609589 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5929h\" (UniqueName: \"kubernetes.io/projected/8da8364f-08ef-4037-96ef-560876f54025-kube-api-access-5929h\") pod \"8da8364f-08ef-4037-96ef-560876f54025\" (UID: \"8da8364f-08ef-4037-96ef-560876f54025\") " Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.609633 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/917c48fe-e9b6-40da-8a57-a107fb5beb34-operator-scripts\") pod \"917c48fe-e9b6-40da-8a57-a107fb5beb34\" (UID: \"917c48fe-e9b6-40da-8a57-a107fb5beb34\") " Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.610363 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8da8364f-08ef-4037-96ef-560876f54025-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "8da8364f-08ef-4037-96ef-560876f54025" (UID: "8da8364f-08ef-4037-96ef-560876f54025"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.610975 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/917c48fe-e9b6-40da-8a57-a107fb5beb34-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "917c48fe-e9b6-40da-8a57-a107fb5beb34" (UID: "917c48fe-e9b6-40da-8a57-a107fb5beb34"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.611280 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8da8364f-08ef-4037-96ef-560876f54025-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.611306 4747 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/917c48fe-e9b6-40da-8a57-a107fb5beb34-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.622837 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/917c48fe-e9b6-40da-8a57-a107fb5beb34-kube-api-access-jrpq2" (OuterVolumeSpecName: "kube-api-access-jrpq2") pod "917c48fe-e9b6-40da-8a57-a107fb5beb34" (UID: "917c48fe-e9b6-40da-8a57-a107fb5beb34"). InnerVolumeSpecName "kube-api-access-jrpq2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.623868 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8da8364f-08ef-4037-96ef-560876f54025-kube-api-access-5929h" (OuterVolumeSpecName: "kube-api-access-5929h") pod "8da8364f-08ef-4037-96ef-560876f54025" (UID: "8da8364f-08ef-4037-96ef-560876f54025"). InnerVolumeSpecName "kube-api-access-5929h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.713650 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5929h\" (UniqueName: \"kubernetes.io/projected/8da8364f-08ef-4037-96ef-560876f54025-kube-api-access-5929h\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:49 crc kubenswrapper[4747]: I1215 05:53:49.713678 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrpq2\" (UniqueName: \"kubernetes.io/projected/917c48fe-e9b6-40da-8a57-a107fb5beb34-kube-api-access-jrpq2\") on node \"crc\" DevicePath \"\"" Dec 15 05:53:50 crc kubenswrapper[4747]: I1215 05:53:50.080167 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-8e39-account-create-update-db98v" Dec 15 05:53:50 crc kubenswrapper[4747]: I1215 05:53:50.081153 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8e39-account-create-update-db98v" event={"ID":"917c48fe-e9b6-40da-8a57-a107fb5beb34","Type":"ContainerDied","Data":"89ed43364e014b3f140c51ee3c7977292baa05f8377ac7506aa404d0f8828d3c"} Dec 15 05:53:50 crc kubenswrapper[4747]: I1215 05:53:50.081267 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89ed43364e014b3f140c51ee3c7977292baa05f8377ac7506aa404d0f8828d3c" Dec 15 05:53:50 crc kubenswrapper[4747]: I1215 05:53:50.089231 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cacf-account-create-update-vg5lk" event={"ID":"8da8364f-08ef-4037-96ef-560876f54025","Type":"ContainerDied","Data":"d345f646dd674eb9a37ed366473795f3381da6e1ad9e0b5ae1b0bbe812447434"} Dec 15 05:53:50 crc kubenswrapper[4747]: I1215 05:53:50.089589 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d345f646dd674eb9a37ed366473795f3381da6e1ad9e0b5ae1b0bbe812447434" Dec 15 05:53:50 crc kubenswrapper[4747]: I1215 05:53:50.089413 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cacf-account-create-update-vg5lk" Dec 15 05:53:50 crc kubenswrapper[4747]: I1215 05:53:50.681615 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 15 05:53:50 crc kubenswrapper[4747]: I1215 05:53:50.791667 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.122091 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pcdnx"] Dec 15 05:53:51 crc kubenswrapper[4747]: E1215 05:53:51.122727 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da8364f-08ef-4037-96ef-560876f54025" containerName="mariadb-account-create-update" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.122741 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da8364f-08ef-4037-96ef-560876f54025" containerName="mariadb-account-create-update" Dec 15 05:53:51 crc kubenswrapper[4747]: E1215 05:53:51.122763 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3354a33b-f658-4c99-a32c-015e29ab16e4" containerName="mariadb-database-create" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.122770 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="3354a33b-f658-4c99-a32c-015e29ab16e4" containerName="mariadb-database-create" Dec 15 05:53:51 crc kubenswrapper[4747]: E1215 05:53:51.122779 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc47afcf-b663-41be-86d7-a77108e5020c" containerName="mariadb-database-create" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.122784 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc47afcf-b663-41be-86d7-a77108e5020c" containerName="mariadb-database-create" Dec 15 05:53:51 crc kubenswrapper[4747]: E1215 05:53:51.122794 4747 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8" containerName="mariadb-database-create" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.122800 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8" containerName="mariadb-database-create" Dec 15 05:53:51 crc kubenswrapper[4747]: E1215 05:53:51.122816 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66655803-661a-4934-8483-30529581438f" containerName="mariadb-account-create-update" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.122822 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="66655803-661a-4934-8483-30529581438f" containerName="mariadb-account-create-update" Dec 15 05:53:51 crc kubenswrapper[4747]: E1215 05:53:51.122834 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="917c48fe-e9b6-40da-8a57-a107fb5beb34" containerName="mariadb-account-create-update" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.122840 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="917c48fe-e9b6-40da-8a57-a107fb5beb34" containerName="mariadb-account-create-update" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.123031 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc47afcf-b663-41be-86d7-a77108e5020c" containerName="mariadb-database-create" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.123064 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="3354a33b-f658-4c99-a32c-015e29ab16e4" containerName="mariadb-database-create" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.123076 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="8da8364f-08ef-4037-96ef-560876f54025" containerName="mariadb-account-create-update" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.123087 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8" containerName="mariadb-database-create" Dec 15 
05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.123098 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="917c48fe-e9b6-40da-8a57-a107fb5beb34" containerName="mariadb-account-create-update" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.123107 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="66655803-661a-4934-8483-30529581438f" containerName="mariadb-account-create-update" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.123686 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-pcdnx" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.125335 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.125512 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.126136 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8zkhd" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.133561 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pcdnx"] Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.250997 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/258be85d-2a31-4830-b893-b8f20560dd71-scripts\") pod \"nova-cell0-conductor-db-sync-pcdnx\" (UID: \"258be85d-2a31-4830-b893-b8f20560dd71\") " pod="openstack/nova-cell0-conductor-db-sync-pcdnx" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.251075 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/258be85d-2a31-4830-b893-b8f20560dd71-config-data\") pod 
\"nova-cell0-conductor-db-sync-pcdnx\" (UID: \"258be85d-2a31-4830-b893-b8f20560dd71\") " pod="openstack/nova-cell0-conductor-db-sync-pcdnx" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.251097 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gtxc\" (UniqueName: \"kubernetes.io/projected/258be85d-2a31-4830-b893-b8f20560dd71-kube-api-access-6gtxc\") pod \"nova-cell0-conductor-db-sync-pcdnx\" (UID: \"258be85d-2a31-4830-b893-b8f20560dd71\") " pod="openstack/nova-cell0-conductor-db-sync-pcdnx" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.251349 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258be85d-2a31-4830-b893-b8f20560dd71-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-pcdnx\" (UID: \"258be85d-2a31-4830-b893-b8f20560dd71\") " pod="openstack/nova-cell0-conductor-db-sync-pcdnx" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.353094 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/258be85d-2a31-4830-b893-b8f20560dd71-scripts\") pod \"nova-cell0-conductor-db-sync-pcdnx\" (UID: \"258be85d-2a31-4830-b893-b8f20560dd71\") " pod="openstack/nova-cell0-conductor-db-sync-pcdnx" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.353169 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/258be85d-2a31-4830-b893-b8f20560dd71-config-data\") pod \"nova-cell0-conductor-db-sync-pcdnx\" (UID: \"258be85d-2a31-4830-b893-b8f20560dd71\") " pod="openstack/nova-cell0-conductor-db-sync-pcdnx" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.353194 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gtxc\" (UniqueName: 
\"kubernetes.io/projected/258be85d-2a31-4830-b893-b8f20560dd71-kube-api-access-6gtxc\") pod \"nova-cell0-conductor-db-sync-pcdnx\" (UID: \"258be85d-2a31-4830-b893-b8f20560dd71\") " pod="openstack/nova-cell0-conductor-db-sync-pcdnx" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.353237 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258be85d-2a31-4830-b893-b8f20560dd71-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-pcdnx\" (UID: \"258be85d-2a31-4830-b893-b8f20560dd71\") " pod="openstack/nova-cell0-conductor-db-sync-pcdnx" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.358977 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/258be85d-2a31-4830-b893-b8f20560dd71-config-data\") pod \"nova-cell0-conductor-db-sync-pcdnx\" (UID: \"258be85d-2a31-4830-b893-b8f20560dd71\") " pod="openstack/nova-cell0-conductor-db-sync-pcdnx" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.368847 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258be85d-2a31-4830-b893-b8f20560dd71-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-pcdnx\" (UID: \"258be85d-2a31-4830-b893-b8f20560dd71\") " pod="openstack/nova-cell0-conductor-db-sync-pcdnx" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.369169 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gtxc\" (UniqueName: \"kubernetes.io/projected/258be85d-2a31-4830-b893-b8f20560dd71-kube-api-access-6gtxc\") pod \"nova-cell0-conductor-db-sync-pcdnx\" (UID: \"258be85d-2a31-4830-b893-b8f20560dd71\") " pod="openstack/nova-cell0-conductor-db-sync-pcdnx" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.379844 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/258be85d-2a31-4830-b893-b8f20560dd71-scripts\") pod \"nova-cell0-conductor-db-sync-pcdnx\" (UID: \"258be85d-2a31-4830-b893-b8f20560dd71\") " pod="openstack/nova-cell0-conductor-db-sync-pcdnx" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.439164 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-pcdnx" Dec 15 05:53:51 crc kubenswrapper[4747]: I1215 05:53:51.859200 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pcdnx"] Dec 15 05:53:52 crc kubenswrapper[4747]: I1215 05:53:52.107744 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-pcdnx" event={"ID":"258be85d-2a31-4830-b893-b8f20560dd71","Type":"ContainerStarted","Data":"e1f9a4bf01ce57a8ddb51e029c3851e4a6f0f22ad4e16ef86733f46a87090ce4"} Dec 15 05:53:58 crc kubenswrapper[4747]: I1215 05:53:58.865480 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 05:53:58 crc kubenswrapper[4747]: I1215 05:53:58.866158 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 05:53:59 crc kubenswrapper[4747]: I1215 05:53:59.129106 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 15 05:54:03 crc kubenswrapper[4747]: I1215 05:54:03.212519 
4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-pcdnx" event={"ID":"258be85d-2a31-4830-b893-b8f20560dd71","Type":"ContainerStarted","Data":"aaa027b22a2ce0770419f04315fb0731b984d09bbd3214a8278d1d510a94714c"} Dec 15 05:54:03 crc kubenswrapper[4747]: I1215 05:54:03.227523 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-pcdnx" podStartSLOduration=1.102845104 podStartE2EDuration="12.227505384s" podCreationTimestamp="2025-12-15 05:53:51 +0000 UTC" firstStartedPulling="2025-12-15 05:53:51.863527124 +0000 UTC m=+995.560039041" lastFinishedPulling="2025-12-15 05:54:02.988187404 +0000 UTC m=+1006.684699321" observedRunningTime="2025-12-15 05:54:03.223787874 +0000 UTC m=+1006.920299791" watchObservedRunningTime="2025-12-15 05:54:03.227505384 +0000 UTC m=+1006.924017301" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.206468 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.225005 4747 generic.go:334] "Generic (PLEG): container finished" podID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" containerID="528035e2e78bcc1014617960a1079b72ab0c9d3e54ac4cdd07d9c73bfd8948e8" exitCode=137 Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.225096 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.225156 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab","Type":"ContainerDied","Data":"528035e2e78bcc1014617960a1079b72ab0c9d3e54ac4cdd07d9c73bfd8948e8"} Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.225187 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab","Type":"ContainerDied","Data":"38829fa44528d9797f3189b7bf573545748bfbd08c00d1dedf664bd2304b8899"} Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.225205 4747 scope.go:117] "RemoveContainer" containerID="528035e2e78bcc1014617960a1079b72ab0c9d3e54ac4cdd07d9c73bfd8948e8" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.252545 4747 scope.go:117] "RemoveContainer" containerID="060f2615c80f98f57a981ebf97753de94f57ff94cb5ebcca426cc96fa4420570" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.284585 4747 scope.go:117] "RemoveContainer" containerID="04ea6ec9dee7d98840f079728a4a45058bbd6871c1aa613688cb2fab919a26f0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.307358 4747 scope.go:117] "RemoveContainer" containerID="092b17ebb6940478f5a6a209ea30bdb7b690dfcbdd47c24a52bed0febada9630" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.333777 4747 scope.go:117] "RemoveContainer" containerID="528035e2e78bcc1014617960a1079b72ab0c9d3e54ac4cdd07d9c73bfd8948e8" Dec 15 05:54:04 crc kubenswrapper[4747]: E1215 05:54:04.334294 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"528035e2e78bcc1014617960a1079b72ab0c9d3e54ac4cdd07d9c73bfd8948e8\": container with ID starting with 528035e2e78bcc1014617960a1079b72ab0c9d3e54ac4cdd07d9c73bfd8948e8 not found: ID does not exist" 
containerID="528035e2e78bcc1014617960a1079b72ab0c9d3e54ac4cdd07d9c73bfd8948e8" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.334329 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"528035e2e78bcc1014617960a1079b72ab0c9d3e54ac4cdd07d9c73bfd8948e8"} err="failed to get container status \"528035e2e78bcc1014617960a1079b72ab0c9d3e54ac4cdd07d9c73bfd8948e8\": rpc error: code = NotFound desc = could not find container \"528035e2e78bcc1014617960a1079b72ab0c9d3e54ac4cdd07d9c73bfd8948e8\": container with ID starting with 528035e2e78bcc1014617960a1079b72ab0c9d3e54ac4cdd07d9c73bfd8948e8 not found: ID does not exist" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.334358 4747 scope.go:117] "RemoveContainer" containerID="060f2615c80f98f57a981ebf97753de94f57ff94cb5ebcca426cc96fa4420570" Dec 15 05:54:04 crc kubenswrapper[4747]: E1215 05:54:04.334627 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"060f2615c80f98f57a981ebf97753de94f57ff94cb5ebcca426cc96fa4420570\": container with ID starting with 060f2615c80f98f57a981ebf97753de94f57ff94cb5ebcca426cc96fa4420570 not found: ID does not exist" containerID="060f2615c80f98f57a981ebf97753de94f57ff94cb5ebcca426cc96fa4420570" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.334651 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"060f2615c80f98f57a981ebf97753de94f57ff94cb5ebcca426cc96fa4420570"} err="failed to get container status \"060f2615c80f98f57a981ebf97753de94f57ff94cb5ebcca426cc96fa4420570\": rpc error: code = NotFound desc = could not find container \"060f2615c80f98f57a981ebf97753de94f57ff94cb5ebcca426cc96fa4420570\": container with ID starting with 060f2615c80f98f57a981ebf97753de94f57ff94cb5ebcca426cc96fa4420570 not found: ID does not exist" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.334666 4747 scope.go:117] 
"RemoveContainer" containerID="04ea6ec9dee7d98840f079728a4a45058bbd6871c1aa613688cb2fab919a26f0" Dec 15 05:54:04 crc kubenswrapper[4747]: E1215 05:54:04.335337 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04ea6ec9dee7d98840f079728a4a45058bbd6871c1aa613688cb2fab919a26f0\": container with ID starting with 04ea6ec9dee7d98840f079728a4a45058bbd6871c1aa613688cb2fab919a26f0 not found: ID does not exist" containerID="04ea6ec9dee7d98840f079728a4a45058bbd6871c1aa613688cb2fab919a26f0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.335365 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04ea6ec9dee7d98840f079728a4a45058bbd6871c1aa613688cb2fab919a26f0"} err="failed to get container status \"04ea6ec9dee7d98840f079728a4a45058bbd6871c1aa613688cb2fab919a26f0\": rpc error: code = NotFound desc = could not find container \"04ea6ec9dee7d98840f079728a4a45058bbd6871c1aa613688cb2fab919a26f0\": container with ID starting with 04ea6ec9dee7d98840f079728a4a45058bbd6871c1aa613688cb2fab919a26f0 not found: ID does not exist" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.335382 4747 scope.go:117] "RemoveContainer" containerID="092b17ebb6940478f5a6a209ea30bdb7b690dfcbdd47c24a52bed0febada9630" Dec 15 05:54:04 crc kubenswrapper[4747]: E1215 05:54:04.335822 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"092b17ebb6940478f5a6a209ea30bdb7b690dfcbdd47c24a52bed0febada9630\": container with ID starting with 092b17ebb6940478f5a6a209ea30bdb7b690dfcbdd47c24a52bed0febada9630 not found: ID does not exist" containerID="092b17ebb6940478f5a6a209ea30bdb7b690dfcbdd47c24a52bed0febada9630" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.335844 4747 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"092b17ebb6940478f5a6a209ea30bdb7b690dfcbdd47c24a52bed0febada9630"} err="failed to get container status \"092b17ebb6940478f5a6a209ea30bdb7b690dfcbdd47c24a52bed0febada9630\": rpc error: code = NotFound desc = could not find container \"092b17ebb6940478f5a6a209ea30bdb7b690dfcbdd47c24a52bed0febada9630\": container with ID starting with 092b17ebb6940478f5a6a209ea30bdb7b690dfcbdd47c24a52bed0febada9630 not found: ID does not exist" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.358632 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvtdb\" (UniqueName: \"kubernetes.io/projected/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-kube-api-access-qvtdb\") pod \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.358685 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-run-httpd\") pod \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.358781 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-combined-ca-bundle\") pod \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.358823 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-config-data\") pod \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.358890 4747 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-sg-core-conf-yaml\") pod \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.358949 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-scripts\") pod \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.359177 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-log-httpd\") pod \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\" (UID: \"01dcd6dc-850f-4ecc-8804-b2998c4cc0ab\") " Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.361915 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" (UID: "01dcd6dc-850f-4ecc-8804-b2998c4cc0ab"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.361979 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" (UID: "01dcd6dc-850f-4ecc-8804-b2998c4cc0ab"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.366107 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-scripts" (OuterVolumeSpecName: "scripts") pod "01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" (UID: "01dcd6dc-850f-4ecc-8804-b2998c4cc0ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.366383 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-kube-api-access-qvtdb" (OuterVolumeSpecName: "kube-api-access-qvtdb") pod "01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" (UID: "01dcd6dc-850f-4ecc-8804-b2998c4cc0ab"). InnerVolumeSpecName "kube-api-access-qvtdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.385532 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" (UID: "01dcd6dc-850f-4ecc-8804-b2998c4cc0ab"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.421683 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" (UID: "01dcd6dc-850f-4ecc-8804-b2998c4cc0ab"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.439584 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-config-data" (OuterVolumeSpecName: "config-data") pod "01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" (UID: "01dcd6dc-850f-4ecc-8804-b2998c4cc0ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.462859 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvtdb\" (UniqueName: \"kubernetes.io/projected/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-kube-api-access-qvtdb\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.462894 4747 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.462907 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.462920 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.462947 4747 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.462956 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.462966 4747 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.564734 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.579096 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.589036 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:54:04 crc kubenswrapper[4747]: E1215 05:54:04.589494 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" containerName="sg-core" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.589519 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" containerName="sg-core" Dec 15 05:54:04 crc kubenswrapper[4747]: E1215 05:54:04.589541 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" containerName="ceilometer-central-agent" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.589549 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" containerName="ceilometer-central-agent" Dec 15 05:54:04 crc kubenswrapper[4747]: E1215 05:54:04.589561 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" containerName="ceilometer-notification-agent" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.589567 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" 
containerName="ceilometer-notification-agent" Dec 15 05:54:04 crc kubenswrapper[4747]: E1215 05:54:04.589575 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" containerName="proxy-httpd" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.589581 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" containerName="proxy-httpd" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.589789 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" containerName="ceilometer-notification-agent" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.589808 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" containerName="proxy-httpd" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.589824 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" containerName="ceilometer-central-agent" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.589844 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" containerName="sg-core" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.591537 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.595351 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.595515 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.600997 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.643161 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01dcd6dc-850f-4ecc-8804-b2998c4cc0ab" path="/var/lib/kubelet/pods/01dcd6dc-850f-4ecc-8804-b2998c4cc0ab/volumes" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.771346 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-scripts\") pod \"ceilometer-0\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.771562 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.772360 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdzql\" (UniqueName: \"kubernetes.io/projected/1afd3938-1da2-4c72-811d-fc9ec8f21171-kube-api-access-tdzql\") pod \"ceilometer-0\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.772572 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1afd3938-1da2-4c72-811d-fc9ec8f21171-log-httpd\") pod \"ceilometer-0\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.772750 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1afd3938-1da2-4c72-811d-fc9ec8f21171-run-httpd\") pod \"ceilometer-0\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.772820 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-config-data\") pod \"ceilometer-0\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.772905 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.874786 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1afd3938-1da2-4c72-811d-fc9ec8f21171-log-httpd\") pod \"ceilometer-0\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.874920 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1afd3938-1da2-4c72-811d-fc9ec8f21171-run-httpd\") pod \"ceilometer-0\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.874972 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-config-data\") pod \"ceilometer-0\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.875012 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.875084 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-scripts\") pod \"ceilometer-0\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.875114 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.875172 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdzql\" (UniqueName: \"kubernetes.io/projected/1afd3938-1da2-4c72-811d-fc9ec8f21171-kube-api-access-tdzql\") pod \"ceilometer-0\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: 
I1215 05:54:04.875390 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1afd3938-1da2-4c72-811d-fc9ec8f21171-log-httpd\") pod \"ceilometer-0\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.875548 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1afd3938-1da2-4c72-811d-fc9ec8f21171-run-httpd\") pod \"ceilometer-0\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.880178 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.880550 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-scripts\") pod \"ceilometer-0\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.880681 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.881581 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-config-data\") pod \"ceilometer-0\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " 
pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.892413 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdzql\" (UniqueName: \"kubernetes.io/projected/1afd3938-1da2-4c72-811d-fc9ec8f21171-kube-api-access-tdzql\") pod \"ceilometer-0\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " pod="openstack/ceilometer-0" Dec 15 05:54:04 crc kubenswrapper[4747]: I1215 05:54:04.915703 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:54:05 crc kubenswrapper[4747]: I1215 05:54:05.361206 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:54:06 crc kubenswrapper[4747]: I1215 05:54:06.244447 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1afd3938-1da2-4c72-811d-fc9ec8f21171","Type":"ContainerStarted","Data":"e7c5738912773c418168bbc3c00c197c2026354e8dc2fbbfbd9e63a52fb8e2f1"} Dec 15 05:54:07 crc kubenswrapper[4747]: I1215 05:54:07.253570 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1afd3938-1da2-4c72-811d-fc9ec8f21171","Type":"ContainerStarted","Data":"a30fa25ec1ba0c2528ed748b740e0d33225cec181c2ae331cdf729efe943fbcf"} Dec 15 05:54:07 crc kubenswrapper[4747]: I1215 05:54:07.253953 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1afd3938-1da2-4c72-811d-fc9ec8f21171","Type":"ContainerStarted","Data":"36cfbd60223f11dd0e6dbbcecb56b0c80f0a9300545de3f3ec074131c3e00be6"} Dec 15 05:54:08 crc kubenswrapper[4747]: I1215 05:54:08.268997 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1afd3938-1da2-4c72-811d-fc9ec8f21171","Type":"ContainerStarted","Data":"ed889bc211ce461cf1e85e83fd6d860227dd9d95280f19c1142a58866f4157ab"} Dec 15 05:54:10 crc kubenswrapper[4747]: I1215 05:54:10.298376 4747 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1afd3938-1da2-4c72-811d-fc9ec8f21171","Type":"ContainerStarted","Data":"b9d1deb8110d848600fee7fc0455b915cf4cffe26ae7bc3ce122843396715989"} Dec 15 05:54:10 crc kubenswrapper[4747]: I1215 05:54:10.298912 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 15 05:54:10 crc kubenswrapper[4747]: I1215 05:54:10.300749 4747 generic.go:334] "Generic (PLEG): container finished" podID="258be85d-2a31-4830-b893-b8f20560dd71" containerID="aaa027b22a2ce0770419f04315fb0731b984d09bbd3214a8278d1d510a94714c" exitCode=0 Dec 15 05:54:10 crc kubenswrapper[4747]: I1215 05:54:10.300802 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-pcdnx" event={"ID":"258be85d-2a31-4830-b893-b8f20560dd71","Type":"ContainerDied","Data":"aaa027b22a2ce0770419f04315fb0731b984d09bbd3214a8278d1d510a94714c"} Dec 15 05:54:10 crc kubenswrapper[4747]: I1215 05:54:10.321159 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.546411653 podStartE2EDuration="6.321146526s" podCreationTimestamp="2025-12-15 05:54:04 +0000 UTC" firstStartedPulling="2025-12-15 05:54:05.348976601 +0000 UTC m=+1009.045488518" lastFinishedPulling="2025-12-15 05:54:09.123711474 +0000 UTC m=+1012.820223391" observedRunningTime="2025-12-15 05:54:10.314160158 +0000 UTC m=+1014.010672075" watchObservedRunningTime="2025-12-15 05:54:10.321146526 +0000 UTC m=+1014.017658443" Dec 15 05:54:11 crc kubenswrapper[4747]: I1215 05:54:11.609780 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-pcdnx" Dec 15 05:54:11 crc kubenswrapper[4747]: I1215 05:54:11.719334 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/258be85d-2a31-4830-b893-b8f20560dd71-scripts\") pod \"258be85d-2a31-4830-b893-b8f20560dd71\" (UID: \"258be85d-2a31-4830-b893-b8f20560dd71\") " Dec 15 05:54:11 crc kubenswrapper[4747]: I1215 05:54:11.719386 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gtxc\" (UniqueName: \"kubernetes.io/projected/258be85d-2a31-4830-b893-b8f20560dd71-kube-api-access-6gtxc\") pod \"258be85d-2a31-4830-b893-b8f20560dd71\" (UID: \"258be85d-2a31-4830-b893-b8f20560dd71\") " Dec 15 05:54:11 crc kubenswrapper[4747]: I1215 05:54:11.719575 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258be85d-2a31-4830-b893-b8f20560dd71-combined-ca-bundle\") pod \"258be85d-2a31-4830-b893-b8f20560dd71\" (UID: \"258be85d-2a31-4830-b893-b8f20560dd71\") " Dec 15 05:54:11 crc kubenswrapper[4747]: I1215 05:54:11.719709 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/258be85d-2a31-4830-b893-b8f20560dd71-config-data\") pod \"258be85d-2a31-4830-b893-b8f20560dd71\" (UID: \"258be85d-2a31-4830-b893-b8f20560dd71\") " Dec 15 05:54:11 crc kubenswrapper[4747]: I1215 05:54:11.726162 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/258be85d-2a31-4830-b893-b8f20560dd71-kube-api-access-6gtxc" (OuterVolumeSpecName: "kube-api-access-6gtxc") pod "258be85d-2a31-4830-b893-b8f20560dd71" (UID: "258be85d-2a31-4830-b893-b8f20560dd71"). InnerVolumeSpecName "kube-api-access-6gtxc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:54:11 crc kubenswrapper[4747]: I1215 05:54:11.733052 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/258be85d-2a31-4830-b893-b8f20560dd71-scripts" (OuterVolumeSpecName: "scripts") pod "258be85d-2a31-4830-b893-b8f20560dd71" (UID: "258be85d-2a31-4830-b893-b8f20560dd71"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:11 crc kubenswrapper[4747]: I1215 05:54:11.744295 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/258be85d-2a31-4830-b893-b8f20560dd71-config-data" (OuterVolumeSpecName: "config-data") pod "258be85d-2a31-4830-b893-b8f20560dd71" (UID: "258be85d-2a31-4830-b893-b8f20560dd71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:11 crc kubenswrapper[4747]: I1215 05:54:11.749712 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/258be85d-2a31-4830-b893-b8f20560dd71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "258be85d-2a31-4830-b893-b8f20560dd71" (UID: "258be85d-2a31-4830-b893-b8f20560dd71"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:11 crc kubenswrapper[4747]: I1215 05:54:11.822753 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258be85d-2a31-4830-b893-b8f20560dd71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:11 crc kubenswrapper[4747]: I1215 05:54:11.822804 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/258be85d-2a31-4830-b893-b8f20560dd71-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:11 crc kubenswrapper[4747]: I1215 05:54:11.822816 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/258be85d-2a31-4830-b893-b8f20560dd71-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:11 crc kubenswrapper[4747]: I1215 05:54:11.822827 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gtxc\" (UniqueName: \"kubernetes.io/projected/258be85d-2a31-4830-b893-b8f20560dd71-kube-api-access-6gtxc\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:12 crc kubenswrapper[4747]: I1215 05:54:12.320442 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-pcdnx" event={"ID":"258be85d-2a31-4830-b893-b8f20560dd71","Type":"ContainerDied","Data":"e1f9a4bf01ce57a8ddb51e029c3851e4a6f0f22ad4e16ef86733f46a87090ce4"} Dec 15 05:54:12 crc kubenswrapper[4747]: I1215 05:54:12.320799 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1f9a4bf01ce57a8ddb51e029c3851e4a6f0f22ad4e16ef86733f46a87090ce4" Dec 15 05:54:12 crc kubenswrapper[4747]: I1215 05:54:12.320562 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-pcdnx" Dec 15 05:54:12 crc kubenswrapper[4747]: I1215 05:54:12.489657 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 15 05:54:12 crc kubenswrapper[4747]: E1215 05:54:12.490023 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="258be85d-2a31-4830-b893-b8f20560dd71" containerName="nova-cell0-conductor-db-sync" Dec 15 05:54:12 crc kubenswrapper[4747]: I1215 05:54:12.490040 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="258be85d-2a31-4830-b893-b8f20560dd71" containerName="nova-cell0-conductor-db-sync" Dec 15 05:54:12 crc kubenswrapper[4747]: I1215 05:54:12.490203 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="258be85d-2a31-4830-b893-b8f20560dd71" containerName="nova-cell0-conductor-db-sync" Dec 15 05:54:12 crc kubenswrapper[4747]: I1215 05:54:12.490730 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 15 05:54:12 crc kubenswrapper[4747]: I1215 05:54:12.492974 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 15 05:54:12 crc kubenswrapper[4747]: I1215 05:54:12.493175 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8zkhd" Dec 15 05:54:12 crc kubenswrapper[4747]: I1215 05:54:12.496387 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 15 05:54:12 crc kubenswrapper[4747]: I1215 05:54:12.536600 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ead9df5f-294c-464e-b416-743ad9245464-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ead9df5f-294c-464e-b416-743ad9245464\") " pod="openstack/nova-cell0-conductor-0" Dec 15 05:54:12 crc kubenswrapper[4747]: I1215 
05:54:12.536675 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl9hc\" (UniqueName: \"kubernetes.io/projected/ead9df5f-294c-464e-b416-743ad9245464-kube-api-access-kl9hc\") pod \"nova-cell0-conductor-0\" (UID: \"ead9df5f-294c-464e-b416-743ad9245464\") " pod="openstack/nova-cell0-conductor-0" Dec 15 05:54:12 crc kubenswrapper[4747]: I1215 05:54:12.536753 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead9df5f-294c-464e-b416-743ad9245464-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ead9df5f-294c-464e-b416-743ad9245464\") " pod="openstack/nova-cell0-conductor-0" Dec 15 05:54:12 crc kubenswrapper[4747]: I1215 05:54:12.646799 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ead9df5f-294c-464e-b416-743ad9245464-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ead9df5f-294c-464e-b416-743ad9245464\") " pod="openstack/nova-cell0-conductor-0" Dec 15 05:54:12 crc kubenswrapper[4747]: I1215 05:54:12.646886 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl9hc\" (UniqueName: \"kubernetes.io/projected/ead9df5f-294c-464e-b416-743ad9245464-kube-api-access-kl9hc\") pod \"nova-cell0-conductor-0\" (UID: \"ead9df5f-294c-464e-b416-743ad9245464\") " pod="openstack/nova-cell0-conductor-0" Dec 15 05:54:12 crc kubenswrapper[4747]: I1215 05:54:12.646955 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead9df5f-294c-464e-b416-743ad9245464-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ead9df5f-294c-464e-b416-743ad9245464\") " pod="openstack/nova-cell0-conductor-0" Dec 15 05:54:12 crc kubenswrapper[4747]: I1215 05:54:12.652154 4747 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ead9df5f-294c-464e-b416-743ad9245464-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ead9df5f-294c-464e-b416-743ad9245464\") " pod="openstack/nova-cell0-conductor-0" Dec 15 05:54:12 crc kubenswrapper[4747]: I1215 05:54:12.653547 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead9df5f-294c-464e-b416-743ad9245464-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ead9df5f-294c-464e-b416-743ad9245464\") " pod="openstack/nova-cell0-conductor-0" Dec 15 05:54:12 crc kubenswrapper[4747]: I1215 05:54:12.664371 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl9hc\" (UniqueName: \"kubernetes.io/projected/ead9df5f-294c-464e-b416-743ad9245464-kube-api-access-kl9hc\") pod \"nova-cell0-conductor-0\" (UID: \"ead9df5f-294c-464e-b416-743ad9245464\") " pod="openstack/nova-cell0-conductor-0" Dec 15 05:54:12 crc kubenswrapper[4747]: I1215 05:54:12.810732 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 15 05:54:13 crc kubenswrapper[4747]: I1215 05:54:13.272726 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 15 05:54:13 crc kubenswrapper[4747]: W1215 05:54:13.274829 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podead9df5f_294c_464e_b416_743ad9245464.slice/crio-f45222c84265ea3896796b4c8c785c79fac93f60edd5e8751df0b964b29672e9 WatchSource:0}: Error finding container f45222c84265ea3896796b4c8c785c79fac93f60edd5e8751df0b964b29672e9: Status 404 returned error can't find the container with id f45222c84265ea3896796b4c8c785c79fac93f60edd5e8751df0b964b29672e9 Dec 15 05:54:13 crc kubenswrapper[4747]: I1215 05:54:13.329301 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ead9df5f-294c-464e-b416-743ad9245464","Type":"ContainerStarted","Data":"f45222c84265ea3896796b4c8c785c79fac93f60edd5e8751df0b964b29672e9"} Dec 15 05:54:14 crc kubenswrapper[4747]: I1215 05:54:14.338701 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ead9df5f-294c-464e-b416-743ad9245464","Type":"ContainerStarted","Data":"e03f50f118ef71f8619970bce93a44df8be4d4dcb7dab2ee103f531449d3ada7"} Dec 15 05:54:14 crc kubenswrapper[4747]: I1215 05:54:14.339043 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 15 05:54:14 crc kubenswrapper[4747]: I1215 05:54:14.356263 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.356247399 podStartE2EDuration="2.356247399s" podCreationTimestamp="2025-12-15 05:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 
05:54:14.353506075 +0000 UTC m=+1018.050017992" watchObservedRunningTime="2025-12-15 05:54:14.356247399 +0000 UTC m=+1018.052759306" Dec 15 05:54:22 crc kubenswrapper[4747]: I1215 05:54:22.839102 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.438459 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-9xp4w"] Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.439824 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9xp4w" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.441649 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.446630 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.449245 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-9xp4w"] Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.550974 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.552292 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.555272 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.556739 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv9bg\" (UniqueName: \"kubernetes.io/projected/e9eb96a2-d315-4936-96e4-0be39cf72b0a-kube-api-access-rv9bg\") pod \"nova-cell0-cell-mapping-9xp4w\" (UID: \"e9eb96a2-d315-4936-96e4-0be39cf72b0a\") " pod="openstack/nova-cell0-cell-mapping-9xp4w" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.556966 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9eb96a2-d315-4936-96e4-0be39cf72b0a-scripts\") pod \"nova-cell0-cell-mapping-9xp4w\" (UID: \"e9eb96a2-d315-4936-96e4-0be39cf72b0a\") " pod="openstack/nova-cell0-cell-mapping-9xp4w" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.557027 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9eb96a2-d315-4936-96e4-0be39cf72b0a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9xp4w\" (UID: \"e9eb96a2-d315-4936-96e4-0be39cf72b0a\") " pod="openstack/nova-cell0-cell-mapping-9xp4w" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.557213 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9eb96a2-d315-4936-96e4-0be39cf72b0a-config-data\") pod \"nova-cell0-cell-mapping-9xp4w\" (UID: \"e9eb96a2-d315-4936-96e4-0be39cf72b0a\") " pod="openstack/nova-cell0-cell-mapping-9xp4w" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.560767 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.612208 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.613468 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.617259 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.629617 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.659203 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9eb96a2-d315-4936-96e4-0be39cf72b0a-config-data\") pod \"nova-cell0-cell-mapping-9xp4w\" (UID: \"e9eb96a2-d315-4936-96e4-0be39cf72b0a\") " pod="openstack/nova-cell0-cell-mapping-9xp4w" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.659357 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv9bg\" (UniqueName: \"kubernetes.io/projected/e9eb96a2-d315-4936-96e4-0be39cf72b0a-kube-api-access-rv9bg\") pod \"nova-cell0-cell-mapping-9xp4w\" (UID: \"e9eb96a2-d315-4936-96e4-0be39cf72b0a\") " pod="openstack/nova-cell0-cell-mapping-9xp4w" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.659388 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa\") " pod="openstack/nova-cell1-novncproxy-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.659438 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa\") " pod="openstack/nova-cell1-novncproxy-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.659468 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbzlq\" (UniqueName: \"kubernetes.io/projected/0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa-kube-api-access-rbzlq\") pod \"nova-cell1-novncproxy-0\" (UID: \"0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa\") " pod="openstack/nova-cell1-novncproxy-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.659537 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9eb96a2-d315-4936-96e4-0be39cf72b0a-scripts\") pod \"nova-cell0-cell-mapping-9xp4w\" (UID: \"e9eb96a2-d315-4936-96e4-0be39cf72b0a\") " pod="openstack/nova-cell0-cell-mapping-9xp4w" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.659580 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9eb96a2-d315-4936-96e4-0be39cf72b0a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9xp4w\" (UID: \"e9eb96a2-d315-4936-96e4-0be39cf72b0a\") " pod="openstack/nova-cell0-cell-mapping-9xp4w" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.669510 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9eb96a2-d315-4936-96e4-0be39cf72b0a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9xp4w\" (UID: \"e9eb96a2-d315-4936-96e4-0be39cf72b0a\") " pod="openstack/nova-cell0-cell-mapping-9xp4w" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.672603 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9eb96a2-d315-4936-96e4-0be39cf72b0a-config-data\") pod \"nova-cell0-cell-mapping-9xp4w\" (UID: \"e9eb96a2-d315-4936-96e4-0be39cf72b0a\") " pod="openstack/nova-cell0-cell-mapping-9xp4w" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.694470 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9eb96a2-d315-4936-96e4-0be39cf72b0a-scripts\") pod \"nova-cell0-cell-mapping-9xp4w\" (UID: \"e9eb96a2-d315-4936-96e4-0be39cf72b0a\") " pod="openstack/nova-cell0-cell-mapping-9xp4w" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.699578 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv9bg\" (UniqueName: \"kubernetes.io/projected/e9eb96a2-d315-4936-96e4-0be39cf72b0a-kube-api-access-rv9bg\") pod \"nova-cell0-cell-mapping-9xp4w\" (UID: \"e9eb96a2-d315-4936-96e4-0be39cf72b0a\") " pod="openstack/nova-cell0-cell-mapping-9xp4w" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.730785 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.732752 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.742681 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.748887 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.764658 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217ecd6d-89af-4db7-a261-ec968737d482-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"217ecd6d-89af-4db7-a261-ec968737d482\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.764906 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa\") " pod="openstack/nova-cell1-novncproxy-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.764989 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa\") " pod="openstack/nova-cell1-novncproxy-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.765031 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbzlq\" (UniqueName: \"kubernetes.io/projected/0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa-kube-api-access-rbzlq\") pod \"nova-cell1-novncproxy-0\" (UID: \"0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa\") " pod="openstack/nova-cell1-novncproxy-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.765140 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217ecd6d-89af-4db7-a261-ec968737d482-config-data\") pod \"nova-scheduler-0\" (UID: \"217ecd6d-89af-4db7-a261-ec968737d482\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.765243 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6brt\" (UniqueName: \"kubernetes.io/projected/217ecd6d-89af-4db7-a261-ec968737d482-kube-api-access-w6brt\") pod \"nova-scheduler-0\" (UID: \"217ecd6d-89af-4db7-a261-ec968737d482\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.781698 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.794451 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.797599 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.798252 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9xp4w" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.827213 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa\") " pod="openstack/nova-cell1-novncproxy-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.831559 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa\") " pod="openstack/nova-cell1-novncproxy-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.873993 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbzlq\" (UniqueName: \"kubernetes.io/projected/0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa-kube-api-access-rbzlq\") pod \"nova-cell1-novncproxy-0\" (UID: \"0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa\") " pod="openstack/nova-cell1-novncproxy-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.877751 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2f3889f-c077-4a30-8d64-348930019517-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c2f3889f-c077-4a30-8d64-348930019517\") " pod="openstack/nova-api-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.877810 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28c8575-a66c-445b-af56-e85ef6fb33e5-config-data\") pod \"nova-metadata-0\" (UID: \"e28c8575-a66c-445b-af56-e85ef6fb33e5\") " pod="openstack/nova-metadata-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 
05:54:23.877853 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28c8575-a66c-445b-af56-e85ef6fb33e5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e28c8575-a66c-445b-af56-e85ef6fb33e5\") " pod="openstack/nova-metadata-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.877891 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217ecd6d-89af-4db7-a261-ec968737d482-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"217ecd6d-89af-4db7-a261-ec968737d482\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.877968 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2f3889f-c077-4a30-8d64-348930019517-logs\") pod \"nova-api-0\" (UID: \"c2f3889f-c077-4a30-8d64-348930019517\") " pod="openstack/nova-api-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.878047 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stgzz\" (UniqueName: \"kubernetes.io/projected/e28c8575-a66c-445b-af56-e85ef6fb33e5-kube-api-access-stgzz\") pod \"nova-metadata-0\" (UID: \"e28c8575-a66c-445b-af56-e85ef6fb33e5\") " pod="openstack/nova-metadata-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.878114 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e28c8575-a66c-445b-af56-e85ef6fb33e5-logs\") pod \"nova-metadata-0\" (UID: \"e28c8575-a66c-445b-af56-e85ef6fb33e5\") " pod="openstack/nova-metadata-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.878150 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/217ecd6d-89af-4db7-a261-ec968737d482-config-data\") pod \"nova-scheduler-0\" (UID: \"217ecd6d-89af-4db7-a261-ec968737d482\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.878174 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2f3889f-c077-4a30-8d64-348930019517-config-data\") pod \"nova-api-0\" (UID: \"c2f3889f-c077-4a30-8d64-348930019517\") " pod="openstack/nova-api-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.878208 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twgpf\" (UniqueName: \"kubernetes.io/projected/c2f3889f-c077-4a30-8d64-348930019517-kube-api-access-twgpf\") pod \"nova-api-0\" (UID: \"c2f3889f-c077-4a30-8d64-348930019517\") " pod="openstack/nova-api-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.878241 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6brt\" (UniqueName: \"kubernetes.io/projected/217ecd6d-89af-4db7-a261-ec968737d482-kube-api-access-w6brt\") pod \"nova-scheduler-0\" (UID: \"217ecd6d-89af-4db7-a261-ec968737d482\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.883449 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217ecd6d-89af-4db7-a261-ec968737d482-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"217ecd6d-89af-4db7-a261-ec968737d482\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.898579 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6brt\" (UniqueName: \"kubernetes.io/projected/217ecd6d-89af-4db7-a261-ec968737d482-kube-api-access-w6brt\") pod \"nova-scheduler-0\" (UID: 
\"217ecd6d-89af-4db7-a261-ec968737d482\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.898636 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.899870 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217ecd6d-89af-4db7-a261-ec968737d482-config-data\") pod \"nova-scheduler-0\" (UID: \"217ecd6d-89af-4db7-a261-ec968737d482\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.932106 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.998980 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stgzz\" (UniqueName: \"kubernetes.io/projected/e28c8575-a66c-445b-af56-e85ef6fb33e5-kube-api-access-stgzz\") pod \"nova-metadata-0\" (UID: \"e28c8575-a66c-445b-af56-e85ef6fb33e5\") " pod="openstack/nova-metadata-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.999214 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e28c8575-a66c-445b-af56-e85ef6fb33e5-logs\") pod \"nova-metadata-0\" (UID: \"e28c8575-a66c-445b-af56-e85ef6fb33e5\") " pod="openstack/nova-metadata-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.999320 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2f3889f-c077-4a30-8d64-348930019517-config-data\") pod \"nova-api-0\" (UID: \"c2f3889f-c077-4a30-8d64-348930019517\") " pod="openstack/nova-api-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.999408 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twgpf\" (UniqueName: 
\"kubernetes.io/projected/c2f3889f-c077-4a30-8d64-348930019517-kube-api-access-twgpf\") pod \"nova-api-0\" (UID: \"c2f3889f-c077-4a30-8d64-348930019517\") " pod="openstack/nova-api-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.999621 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2f3889f-c077-4a30-8d64-348930019517-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c2f3889f-c077-4a30-8d64-348930019517\") " pod="openstack/nova-api-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.999690 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28c8575-a66c-445b-af56-e85ef6fb33e5-config-data\") pod \"nova-metadata-0\" (UID: \"e28c8575-a66c-445b-af56-e85ef6fb33e5\") " pod="openstack/nova-metadata-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.999779 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28c8575-a66c-445b-af56-e85ef6fb33e5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e28c8575-a66c-445b-af56-e85ef6fb33e5\") " pod="openstack/nova-metadata-0" Dec 15 05:54:23 crc kubenswrapper[4747]: I1215 05:54:23.999955 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2f3889f-c077-4a30-8d64-348930019517-logs\") pod \"nova-api-0\" (UID: \"c2f3889f-c077-4a30-8d64-348930019517\") " pod="openstack/nova-api-0" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.000547 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2f3889f-c077-4a30-8d64-348930019517-logs\") pod \"nova-api-0\" (UID: \"c2f3889f-c077-4a30-8d64-348930019517\") " pod="openstack/nova-api-0" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.006265 4747 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e28c8575-a66c-445b-af56-e85ef6fb33e5-logs\") pod \"nova-metadata-0\" (UID: \"e28c8575-a66c-445b-af56-e85ef6fb33e5\") " pod="openstack/nova-metadata-0" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.013569 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2f3889f-c077-4a30-8d64-348930019517-config-data\") pod \"nova-api-0\" (UID: \"c2f3889f-c077-4a30-8d64-348930019517\") " pod="openstack/nova-api-0" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.014260 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28c8575-a66c-445b-af56-e85ef6fb33e5-config-data\") pod \"nova-metadata-0\" (UID: \"e28c8575-a66c-445b-af56-e85ef6fb33e5\") " pod="openstack/nova-metadata-0" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.018554 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78556b4b47-jdvwf"] Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.018654 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28c8575-a66c-445b-af56-e85ef6fb33e5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e28c8575-a66c-445b-af56-e85ef6fb33e5\") " pod="openstack/nova-metadata-0" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.020149 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.026666 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twgpf\" (UniqueName: \"kubernetes.io/projected/c2f3889f-c077-4a30-8d64-348930019517-kube-api-access-twgpf\") pod \"nova-api-0\" (UID: \"c2f3889f-c077-4a30-8d64-348930019517\") " pod="openstack/nova-api-0" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.026693 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2f3889f-c077-4a30-8d64-348930019517-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c2f3889f-c077-4a30-8d64-348930019517\") " pod="openstack/nova-api-0" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.033344 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stgzz\" (UniqueName: \"kubernetes.io/projected/e28c8575-a66c-445b-af56-e85ef6fb33e5-kube-api-access-stgzz\") pod \"nova-metadata-0\" (UID: \"e28c8575-a66c-445b-af56-e85ef6fb33e5\") " pod="openstack/nova-metadata-0" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.046703 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78556b4b47-jdvwf"] Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.095991 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.171994 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.205450 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxmq8\" (UniqueName: \"kubernetes.io/projected/d306a755-b503-4799-b23b-05b1afe561eb-kube-api-access-hxmq8\") pod \"dnsmasq-dns-78556b4b47-jdvwf\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.205605 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-ovsdbserver-sb\") pod \"dnsmasq-dns-78556b4b47-jdvwf\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.205735 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-dns-swift-storage-0\") pod \"dnsmasq-dns-78556b4b47-jdvwf\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.205806 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-config\") pod \"dnsmasq-dns-78556b4b47-jdvwf\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.205896 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-ovsdbserver-nb\") pod 
\"dnsmasq-dns-78556b4b47-jdvwf\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.205979 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-dns-svc\") pod \"dnsmasq-dns-78556b4b47-jdvwf\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.215618 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.308312 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-dns-swift-storage-0\") pod \"dnsmasq-dns-78556b4b47-jdvwf\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.308368 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-config\") pod \"dnsmasq-dns-78556b4b47-jdvwf\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.308486 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-ovsdbserver-nb\") pod \"dnsmasq-dns-78556b4b47-jdvwf\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.308538 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-dns-svc\") pod \"dnsmasq-dns-78556b4b47-jdvwf\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.308647 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxmq8\" (UniqueName: \"kubernetes.io/projected/d306a755-b503-4799-b23b-05b1afe561eb-kube-api-access-hxmq8\") pod \"dnsmasq-dns-78556b4b47-jdvwf\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.308737 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-ovsdbserver-sb\") pod \"dnsmasq-dns-78556b4b47-jdvwf\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.309789 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-ovsdbserver-sb\") pod \"dnsmasq-dns-78556b4b47-jdvwf\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.309850 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-dns-swift-storage-0\") pod \"dnsmasq-dns-78556b4b47-jdvwf\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.310733 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-config\") pod \"dnsmasq-dns-78556b4b47-jdvwf\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.316069 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-ovsdbserver-nb\") pod \"dnsmasq-dns-78556b4b47-jdvwf\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.321590 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-dns-svc\") pod \"dnsmasq-dns-78556b4b47-jdvwf\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.329192 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-9xp4w"] Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.338399 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxmq8\" (UniqueName: \"kubernetes.io/projected/d306a755-b503-4799-b23b-05b1afe561eb-kube-api-access-hxmq8\") pod \"dnsmasq-dns-78556b4b47-jdvwf\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.362169 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.456035 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.457718 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9xp4w" event={"ID":"e9eb96a2-d315-4936-96e4-0be39cf72b0a","Type":"ContainerStarted","Data":"e3a0c1493d6f1922804f3296fa84b185c1668904cda0f10b3f6dc11463b76c04"} Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.614855 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 15 05:54:24 crc kubenswrapper[4747]: W1215 05:54:24.624768 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2f3889f_c077_4a30_8d64_348930019517.slice/crio-44b85910aff7f73fd236fe2fc84dfddea22b5c2ebf97104e0141ae7115668843 WatchSource:0}: Error finding container 44b85910aff7f73fd236fe2fc84dfddea22b5c2ebf97104e0141ae7115668843: Status 404 returned error can't find the container with id 44b85910aff7f73fd236fe2fc84dfddea22b5c2ebf97104e0141ae7115668843 Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.734358 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 15 05:54:24 crc kubenswrapper[4747]: W1215 05:54:24.735434 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode28c8575_a66c_445b_af56_e85ef6fb33e5.slice/crio-6e9341a4d392cddd72a486bc619480b3f41798ba8914367b1b419623462cc5df WatchSource:0}: Error finding container 6e9341a4d392cddd72a486bc619480b3f41798ba8914367b1b419623462cc5df: Status 404 returned error can't find the container with id 6e9341a4d392cddd72a486bc619480b3f41798ba8914367b1b419623462cc5df Dec 15 05:54:24 crc kubenswrapper[4747]: W1215 05:54:24.739260 4747 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c25ca5b_7708_4e2f_a5b7_6e37b50d44aa.slice/crio-503580dd478d443db2f03d270377ee0c8f27c3305769e6826aaf7e547a2a0fad WatchSource:0}: Error finding container 503580dd478d443db2f03d270377ee0c8f27c3305769e6826aaf7e547a2a0fad: Status 404 returned error can't find the container with id 503580dd478d443db2f03d270377ee0c8f27c3305769e6826aaf7e547a2a0fad Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.748458 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.836689 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-p84db"] Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.838025 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-p84db" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.840010 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.840357 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.853850 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-p84db"] Dec 15 05:54:24 crc kubenswrapper[4747]: W1215 05:54:24.895277 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd306a755_b503_4799_b23b_05b1afe561eb.slice/crio-6ff07bfbad3b777e62a33a9a17790054fff6281cf300040031c0593a5062e743 WatchSource:0}: Error finding container 6ff07bfbad3b777e62a33a9a17790054fff6281cf300040031c0593a5062e743: Status 404 returned error can't find the container with id 
6ff07bfbad3b777e62a33a9a17790054fff6281cf300040031c0593a5062e743 Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.896279 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78556b4b47-jdvwf"] Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.926418 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-scripts\") pod \"nova-cell1-conductor-db-sync-p84db\" (UID: \"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6\") " pod="openstack/nova-cell1-conductor-db-sync-p84db" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.926468 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-config-data\") pod \"nova-cell1-conductor-db-sync-p84db\" (UID: \"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6\") " pod="openstack/nova-cell1-conductor-db-sync-p84db" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.926576 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnm2d\" (UniqueName: \"kubernetes.io/projected/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-kube-api-access-hnm2d\") pod \"nova-cell1-conductor-db-sync-p84db\" (UID: \"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6\") " pod="openstack/nova-cell1-conductor-db-sync-p84db" Dec 15 05:54:24 crc kubenswrapper[4747]: I1215 05:54:24.926629 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-p84db\" (UID: \"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6\") " pod="openstack/nova-cell1-conductor-db-sync-p84db" Dec 15 05:54:25 crc kubenswrapper[4747]: I1215 05:54:25.029450 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-scripts\") pod \"nova-cell1-conductor-db-sync-p84db\" (UID: \"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6\") " pod="openstack/nova-cell1-conductor-db-sync-p84db" Dec 15 05:54:25 crc kubenswrapper[4747]: I1215 05:54:25.029626 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-config-data\") pod \"nova-cell1-conductor-db-sync-p84db\" (UID: \"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6\") " pod="openstack/nova-cell1-conductor-db-sync-p84db" Dec 15 05:54:25 crc kubenswrapper[4747]: I1215 05:54:25.029824 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnm2d\" (UniqueName: \"kubernetes.io/projected/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-kube-api-access-hnm2d\") pod \"nova-cell1-conductor-db-sync-p84db\" (UID: \"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6\") " pod="openstack/nova-cell1-conductor-db-sync-p84db" Dec 15 05:54:25 crc kubenswrapper[4747]: I1215 05:54:25.031726 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-p84db\" (UID: \"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6\") " pod="openstack/nova-cell1-conductor-db-sync-p84db" Dec 15 05:54:25 crc kubenswrapper[4747]: I1215 05:54:25.033588 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-config-data\") pod \"nova-cell1-conductor-db-sync-p84db\" (UID: \"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6\") " pod="openstack/nova-cell1-conductor-db-sync-p84db" Dec 15 05:54:25 crc kubenswrapper[4747]: I1215 05:54:25.034245 4747 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-scripts\") pod \"nova-cell1-conductor-db-sync-p84db\" (UID: \"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6\") " pod="openstack/nova-cell1-conductor-db-sync-p84db" Dec 15 05:54:25 crc kubenswrapper[4747]: I1215 05:54:25.037273 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-p84db\" (UID: \"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6\") " pod="openstack/nova-cell1-conductor-db-sync-p84db" Dec 15 05:54:25 crc kubenswrapper[4747]: I1215 05:54:25.047370 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnm2d\" (UniqueName: \"kubernetes.io/projected/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-kube-api-access-hnm2d\") pod \"nova-cell1-conductor-db-sync-p84db\" (UID: \"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6\") " pod="openstack/nova-cell1-conductor-db-sync-p84db" Dec 15 05:54:25 crc kubenswrapper[4747]: I1215 05:54:25.159984 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-p84db" Dec 15 05:54:25 crc kubenswrapper[4747]: I1215 05:54:25.476618 4747 generic.go:334] "Generic (PLEG): container finished" podID="d306a755-b503-4799-b23b-05b1afe561eb" containerID="f3301224d1e44581d89a204b56b74b2f339dc1b29dbc9669ce366463d55391d3" exitCode=0 Dec 15 05:54:25 crc kubenswrapper[4747]: I1215 05:54:25.476770 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" event={"ID":"d306a755-b503-4799-b23b-05b1afe561eb","Type":"ContainerDied","Data":"f3301224d1e44581d89a204b56b74b2f339dc1b29dbc9669ce366463d55391d3"} Dec 15 05:54:25 crc kubenswrapper[4747]: I1215 05:54:25.477192 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" event={"ID":"d306a755-b503-4799-b23b-05b1afe561eb","Type":"ContainerStarted","Data":"6ff07bfbad3b777e62a33a9a17790054fff6281cf300040031c0593a5062e743"} Dec 15 05:54:25 crc kubenswrapper[4747]: I1215 05:54:25.479787 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"217ecd6d-89af-4db7-a261-ec968737d482","Type":"ContainerStarted","Data":"8e0fe39aa8280b7316b43b161a6b3b877f7ab9a3cbaebeb8825674315f867644"} Dec 15 05:54:25 crc kubenswrapper[4747]: I1215 05:54:25.482094 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c2f3889f-c077-4a30-8d64-348930019517","Type":"ContainerStarted","Data":"44b85910aff7f73fd236fe2fc84dfddea22b5c2ebf97104e0141ae7115668843"} Dec 15 05:54:25 crc kubenswrapper[4747]: I1215 05:54:25.483805 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e28c8575-a66c-445b-af56-e85ef6fb33e5","Type":"ContainerStarted","Data":"6e9341a4d392cddd72a486bc619480b3f41798ba8914367b1b419623462cc5df"} Dec 15 05:54:25 crc kubenswrapper[4747]: I1215 05:54:25.486657 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa","Type":"ContainerStarted","Data":"503580dd478d443db2f03d270377ee0c8f27c3305769e6826aaf7e547a2a0fad"} Dec 15 05:54:25 crc kubenswrapper[4747]: I1215 05:54:25.490438 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9xp4w" event={"ID":"e9eb96a2-d315-4936-96e4-0be39cf72b0a","Type":"ContainerStarted","Data":"e6abf5d627df5f52b5d6a5057119f4e93d31d86d2d2362b069708672cca46d44"} Dec 15 05:54:25 crc kubenswrapper[4747]: I1215 05:54:25.512421 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-9xp4w" podStartSLOduration=2.512404276 podStartE2EDuration="2.512404276s" podCreationTimestamp="2025-12-15 05:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:54:25.50983264 +0000 UTC m=+1029.206344557" watchObservedRunningTime="2025-12-15 05:54:25.512404276 +0000 UTC m=+1029.208916193" Dec 15 05:54:25 crc kubenswrapper[4747]: I1215 05:54:25.581667 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-p84db"] Dec 15 05:54:25 crc kubenswrapper[4747]: W1215 05:54:25.589117 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedecc2ab_d1b8_4dac_bc1c_825e65c0aaa6.slice/crio-fa68624167910240adec852a6b5d777e2cf959f3e597644c6e690097c3757dee WatchSource:0}: Error finding container fa68624167910240adec852a6b5d777e2cf959f3e597644c6e690097c3757dee: Status 404 returned error can't find the container with id fa68624167910240adec852a6b5d777e2cf959f3e597644c6e690097c3757dee Dec 15 05:54:26 crc kubenswrapper[4747]: I1215 05:54:26.498942 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-p84db" 
event={"ID":"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6","Type":"ContainerStarted","Data":"3d1a741cc555aac767160793911a42f77314b91f4f076166844e077632d56197"} Dec 15 05:54:26 crc kubenswrapper[4747]: I1215 05:54:26.499196 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-p84db" event={"ID":"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6","Type":"ContainerStarted","Data":"fa68624167910240adec852a6b5d777e2cf959f3e597644c6e690097c3757dee"} Dec 15 05:54:26 crc kubenswrapper[4747]: I1215 05:54:26.504786 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" event={"ID":"d306a755-b503-4799-b23b-05b1afe561eb","Type":"ContainerStarted","Data":"a08da45d3be84d6cae5e5d6b0a9fa865e095b6cc5f5001b4ccd2c076007f6fbb"} Dec 15 05:54:26 crc kubenswrapper[4747]: I1215 05:54:26.504811 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:54:26 crc kubenswrapper[4747]: I1215 05:54:26.514458 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-p84db" podStartSLOduration=2.514448312 podStartE2EDuration="2.514448312s" podCreationTimestamp="2025-12-15 05:54:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:54:26.51290534 +0000 UTC m=+1030.209417257" watchObservedRunningTime="2025-12-15 05:54:26.514448312 +0000 UTC m=+1030.210960229" Dec 15 05:54:26 crc kubenswrapper[4747]: I1215 05:54:26.657683 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" podStartSLOduration=3.657637605 podStartE2EDuration="3.657637605s" podCreationTimestamp="2025-12-15 05:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:54:26.548619179 
+0000 UTC m=+1030.245131096" watchObservedRunningTime="2025-12-15 05:54:26.657637605 +0000 UTC m=+1030.354149523" Dec 15 05:54:27 crc kubenswrapper[4747]: I1215 05:54:27.520050 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"217ecd6d-89af-4db7-a261-ec968737d482","Type":"ContainerStarted","Data":"fc5b2f3d1f6b8c2f84624343838b868dc02abc52706fc9032df4d871b958bf38"} Dec 15 05:54:27 crc kubenswrapper[4747]: I1215 05:54:27.543701 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.570760413 podStartE2EDuration="4.543681265s" podCreationTimestamp="2025-12-15 05:54:23 +0000 UTC" firstStartedPulling="2025-12-15 05:54:24.486032674 +0000 UTC m=+1028.182544580" lastFinishedPulling="2025-12-15 05:54:26.458953515 +0000 UTC m=+1030.155465432" observedRunningTime="2025-12-15 05:54:27.541588991 +0000 UTC m=+1031.238100908" watchObservedRunningTime="2025-12-15 05:54:27.543681265 +0000 UTC m=+1031.240193182" Dec 15 05:54:28 crc kubenswrapper[4747]: I1215 05:54:28.192863 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 15 05:54:28 crc kubenswrapper[4747]: I1215 05:54:28.205436 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 15 05:54:28 crc kubenswrapper[4747]: I1215 05:54:28.547020 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c2f3889f-c077-4a30-8d64-348930019517","Type":"ContainerStarted","Data":"e444f11f91d05c08e98d60b9ba428826f3608d6b6ca60f3ad4f691930574b6bd"} Dec 15 05:54:28 crc kubenswrapper[4747]: I1215 05:54:28.547084 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c2f3889f-c077-4a30-8d64-348930019517","Type":"ContainerStarted","Data":"ca0bb43cac36d0e0529579796ba3cc4fe5536d1ed035e618cced2014ebc86f8f"} Dec 15 05:54:28 crc kubenswrapper[4747]: I1215 05:54:28.550277 
4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e28c8575-a66c-445b-af56-e85ef6fb33e5","Type":"ContainerStarted","Data":"a21b9bf3214b396160f407e2278ee613bd608f0a84f229474c787509925d25c2"} Dec 15 05:54:28 crc kubenswrapper[4747]: I1215 05:54:28.550329 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e28c8575-a66c-445b-af56-e85ef6fb33e5","Type":"ContainerStarted","Data":"eca3930d718438dbe8faf7feb421e80887a796f322068c067949e3b2500db5da"} Dec 15 05:54:28 crc kubenswrapper[4747]: I1215 05:54:28.552386 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa","Type":"ContainerStarted","Data":"a1fc54db2d4c7ce5dd2d9fb31fcca8703a4ffcbfae6a8d40a46b0a5793371591"} Dec 15 05:54:28 crc kubenswrapper[4747]: I1215 05:54:28.571662 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.526961171 podStartE2EDuration="5.571646895s" podCreationTimestamp="2025-12-15 05:54:23 +0000 UTC" firstStartedPulling="2025-12-15 05:54:24.627647817 +0000 UTC m=+1028.324159734" lastFinishedPulling="2025-12-15 05:54:27.672333542 +0000 UTC m=+1031.368845458" observedRunningTime="2025-12-15 05:54:28.562533157 +0000 UTC m=+1032.259045075" watchObservedRunningTime="2025-12-15 05:54:28.571646895 +0000 UTC m=+1032.268158813" Dec 15 05:54:28 crc kubenswrapper[4747]: I1215 05:54:28.585724 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.660680105 podStartE2EDuration="5.585712496s" podCreationTimestamp="2025-12-15 05:54:23 +0000 UTC" firstStartedPulling="2025-12-15 05:54:24.742523186 +0000 UTC m=+1028.439035104" lastFinishedPulling="2025-12-15 05:54:27.667555577 +0000 UTC m=+1031.364067495" observedRunningTime="2025-12-15 05:54:28.576179438 +0000 UTC m=+1032.272691355" 
watchObservedRunningTime="2025-12-15 05:54:28.585712496 +0000 UTC m=+1032.282224413" Dec 15 05:54:28 crc kubenswrapper[4747]: I1215 05:54:28.605382 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.67736509 podStartE2EDuration="5.605361885s" podCreationTimestamp="2025-12-15 05:54:23 +0000 UTC" firstStartedPulling="2025-12-15 05:54:24.741313642 +0000 UTC m=+1028.437825559" lastFinishedPulling="2025-12-15 05:54:27.669310436 +0000 UTC m=+1031.365822354" observedRunningTime="2025-12-15 05:54:28.592593334 +0000 UTC m=+1032.289105251" watchObservedRunningTime="2025-12-15 05:54:28.605361885 +0000 UTC m=+1032.301873802" Dec 15 05:54:28 crc kubenswrapper[4747]: I1215 05:54:28.865346 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 05:54:28 crc kubenswrapper[4747]: I1215 05:54:28.865426 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 05:54:28 crc kubenswrapper[4747]: I1215 05:54:28.933759 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 15 05:54:29 crc kubenswrapper[4747]: I1215 05:54:29.172632 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 15 05:54:29 crc kubenswrapper[4747]: I1215 05:54:29.216463 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 15 05:54:29 crc kubenswrapper[4747]: I1215 
05:54:29.216512 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 15 05:54:29 crc kubenswrapper[4747]: I1215 05:54:29.563291 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a1fc54db2d4c7ce5dd2d9fb31fcca8703a4ffcbfae6a8d40a46b0a5793371591" gracePeriod=30 Dec 15 05:54:29 crc kubenswrapper[4747]: I1215 05:54:29.564081 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e28c8575-a66c-445b-af56-e85ef6fb33e5" containerName="nova-metadata-metadata" containerID="cri-o://a21b9bf3214b396160f407e2278ee613bd608f0a84f229474c787509925d25c2" gracePeriod=30 Dec 15 05:54:29 crc kubenswrapper[4747]: I1215 05:54:29.564106 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e28c8575-a66c-445b-af56-e85ef6fb33e5" containerName="nova-metadata-log" containerID="cri-o://eca3930d718438dbe8faf7feb421e80887a796f322068c067949e3b2500db5da" gracePeriod=30 Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.129096 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.160906 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.257739 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e28c8575-a66c-445b-af56-e85ef6fb33e5-logs\") pod \"e28c8575-a66c-445b-af56-e85ef6fb33e5\" (UID: \"e28c8575-a66c-445b-af56-e85ef6fb33e5\") " Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.257941 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28c8575-a66c-445b-af56-e85ef6fb33e5-config-data\") pod \"e28c8575-a66c-445b-af56-e85ef6fb33e5\" (UID: \"e28c8575-a66c-445b-af56-e85ef6fb33e5\") " Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.258143 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28c8575-a66c-445b-af56-e85ef6fb33e5-combined-ca-bundle\") pod \"e28c8575-a66c-445b-af56-e85ef6fb33e5\" (UID: \"e28c8575-a66c-445b-af56-e85ef6fb33e5\") " Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.258480 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stgzz\" (UniqueName: \"kubernetes.io/projected/e28c8575-a66c-445b-af56-e85ef6fb33e5-kube-api-access-stgzz\") pod \"e28c8575-a66c-445b-af56-e85ef6fb33e5\" (UID: \"e28c8575-a66c-445b-af56-e85ef6fb33e5\") " Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.258477 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e28c8575-a66c-445b-af56-e85ef6fb33e5-logs" (OuterVolumeSpecName: "logs") pod "e28c8575-a66c-445b-af56-e85ef6fb33e5" (UID: "e28c8575-a66c-445b-af56-e85ef6fb33e5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.260482 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e28c8575-a66c-445b-af56-e85ef6fb33e5-logs\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.265336 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28c8575-a66c-445b-af56-e85ef6fb33e5-kube-api-access-stgzz" (OuterVolumeSpecName: "kube-api-access-stgzz") pod "e28c8575-a66c-445b-af56-e85ef6fb33e5" (UID: "e28c8575-a66c-445b-af56-e85ef6fb33e5"). InnerVolumeSpecName "kube-api-access-stgzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.286189 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e28c8575-a66c-445b-af56-e85ef6fb33e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e28c8575-a66c-445b-af56-e85ef6fb33e5" (UID: "e28c8575-a66c-445b-af56-e85ef6fb33e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.286781 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e28c8575-a66c-445b-af56-e85ef6fb33e5-config-data" (OuterVolumeSpecName: "config-data") pod "e28c8575-a66c-445b-af56-e85ef6fb33e5" (UID: "e28c8575-a66c-445b-af56-e85ef6fb33e5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.361522 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa-combined-ca-bundle\") pod \"0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa\" (UID: \"0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa\") " Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.361590 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbzlq\" (UniqueName: \"kubernetes.io/projected/0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa-kube-api-access-rbzlq\") pod \"0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa\" (UID: \"0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa\") " Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.362023 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa-config-data\") pod \"0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa\" (UID: \"0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa\") " Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.362772 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stgzz\" (UniqueName: \"kubernetes.io/projected/e28c8575-a66c-445b-af56-e85ef6fb33e5-kube-api-access-stgzz\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.362798 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28c8575-a66c-445b-af56-e85ef6fb33e5-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.362813 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28c8575-a66c-445b-af56-e85ef6fb33e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 
05:54:30.364987 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa-kube-api-access-rbzlq" (OuterVolumeSpecName: "kube-api-access-rbzlq") pod "0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa" (UID: "0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa"). InnerVolumeSpecName "kube-api-access-rbzlq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.384648 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa-config-data" (OuterVolumeSpecName: "config-data") pod "0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa" (UID: "0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.385055 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa" (UID: "0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.465208 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.465475 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbzlq\" (UniqueName: \"kubernetes.io/projected/0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa-kube-api-access-rbzlq\") on node \"crc\" DevicePath \"\""
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.465492 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa-config-data\") on node \"crc\" DevicePath \"\""
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.575536 4747 generic.go:334] "Generic (PLEG): container finished" podID="e28c8575-a66c-445b-af56-e85ef6fb33e5" containerID="a21b9bf3214b396160f407e2278ee613bd608f0a84f229474c787509925d25c2" exitCode=0
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.575742 4747 generic.go:334] "Generic (PLEG): container finished" podID="e28c8575-a66c-445b-af56-e85ef6fb33e5" containerID="eca3930d718438dbe8faf7feb421e80887a796f322068c067949e3b2500db5da" exitCode=143
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.575659 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e28c8575-a66c-445b-af56-e85ef6fb33e5","Type":"ContainerDied","Data":"a21b9bf3214b396160f407e2278ee613bd608f0a84f229474c787509925d25c2"}
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.575689 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.575867 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e28c8575-a66c-445b-af56-e85ef6fb33e5","Type":"ContainerDied","Data":"eca3930d718438dbe8faf7feb421e80887a796f322068c067949e3b2500db5da"}
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.575913 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e28c8575-a66c-445b-af56-e85ef6fb33e5","Type":"ContainerDied","Data":"6e9341a4d392cddd72a486bc619480b3f41798ba8914367b1b419623462cc5df"}
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.575965 4747 scope.go:117] "RemoveContainer" containerID="a21b9bf3214b396160f407e2278ee613bd608f0a84f229474c787509925d25c2"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.577867 4747 generic.go:334] "Generic (PLEG): container finished" podID="0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa" containerID="a1fc54db2d4c7ce5dd2d9fb31fcca8703a4ffcbfae6a8d40a46b0a5793371591" exitCode=0
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.577951 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa","Type":"ContainerDied","Data":"a1fc54db2d4c7ce5dd2d9fb31fcca8703a4ffcbfae6a8d40a46b0a5793371591"}
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.577980 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa","Type":"ContainerDied","Data":"503580dd478d443db2f03d270377ee0c8f27c3305769e6826aaf7e547a2a0fad"}
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.577902 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.580598 4747 generic.go:334] "Generic (PLEG): container finished" podID="e9eb96a2-d315-4936-96e4-0be39cf72b0a" containerID="e6abf5d627df5f52b5d6a5057119f4e93d31d86d2d2362b069708672cca46d44" exitCode=0
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.580713 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9xp4w" event={"ID":"e9eb96a2-d315-4936-96e4-0be39cf72b0a","Type":"ContainerDied","Data":"e6abf5d627df5f52b5d6a5057119f4e93d31d86d2d2362b069708672cca46d44"}
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.582308 4747 generic.go:334] "Generic (PLEG): container finished" podID="edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6" containerID="3d1a741cc555aac767160793911a42f77314b91f4f076166844e077632d56197" exitCode=0
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.582367 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-p84db" event={"ID":"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6","Type":"ContainerDied","Data":"3d1a741cc555aac767160793911a42f77314b91f4f076166844e077632d56197"}
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.611055 4747 scope.go:117] "RemoveContainer" containerID="eca3930d718438dbe8faf7feb421e80887a796f322068c067949e3b2500db5da"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.642611 4747 scope.go:117] "RemoveContainer" containerID="a21b9bf3214b396160f407e2278ee613bd608f0a84f229474c787509925d25c2"
Dec 15 05:54:30 crc kubenswrapper[4747]: E1215 05:54:30.644978 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a21b9bf3214b396160f407e2278ee613bd608f0a84f229474c787509925d25c2\": container with ID starting with a21b9bf3214b396160f407e2278ee613bd608f0a84f229474c787509925d25c2 not found: ID does not exist" containerID="a21b9bf3214b396160f407e2278ee613bd608f0a84f229474c787509925d25c2"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.645020 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a21b9bf3214b396160f407e2278ee613bd608f0a84f229474c787509925d25c2"} err="failed to get container status \"a21b9bf3214b396160f407e2278ee613bd608f0a84f229474c787509925d25c2\": rpc error: code = NotFound desc = could not find container \"a21b9bf3214b396160f407e2278ee613bd608f0a84f229474c787509925d25c2\": container with ID starting with a21b9bf3214b396160f407e2278ee613bd608f0a84f229474c787509925d25c2 not found: ID does not exist"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.645050 4747 scope.go:117] "RemoveContainer" containerID="eca3930d718438dbe8faf7feb421e80887a796f322068c067949e3b2500db5da"
Dec 15 05:54:30 crc kubenswrapper[4747]: E1215 05:54:30.645269 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eca3930d718438dbe8faf7feb421e80887a796f322068c067949e3b2500db5da\": container with ID starting with eca3930d718438dbe8faf7feb421e80887a796f322068c067949e3b2500db5da not found: ID does not exist" containerID="eca3930d718438dbe8faf7feb421e80887a796f322068c067949e3b2500db5da"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.645291 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eca3930d718438dbe8faf7feb421e80887a796f322068c067949e3b2500db5da"} err="failed to get container status \"eca3930d718438dbe8faf7feb421e80887a796f322068c067949e3b2500db5da\": rpc error: code = NotFound desc = could not find container \"eca3930d718438dbe8faf7feb421e80887a796f322068c067949e3b2500db5da\": container with ID starting with eca3930d718438dbe8faf7feb421e80887a796f322068c067949e3b2500db5da not found: ID does not exist"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.645303 4747 scope.go:117] "RemoveContainer" containerID="a21b9bf3214b396160f407e2278ee613bd608f0a84f229474c787509925d25c2"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.645469 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a21b9bf3214b396160f407e2278ee613bd608f0a84f229474c787509925d25c2"} err="failed to get container status \"a21b9bf3214b396160f407e2278ee613bd608f0a84f229474c787509925d25c2\": rpc error: code = NotFound desc = could not find container \"a21b9bf3214b396160f407e2278ee613bd608f0a84f229474c787509925d25c2\": container with ID starting with a21b9bf3214b396160f407e2278ee613bd608f0a84f229474c787509925d25c2 not found: ID does not exist"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.645485 4747 scope.go:117] "RemoveContainer" containerID="eca3930d718438dbe8faf7feb421e80887a796f322068c067949e3b2500db5da"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.645837 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eca3930d718438dbe8faf7feb421e80887a796f322068c067949e3b2500db5da"} err="failed to get container status \"eca3930d718438dbe8faf7feb421e80887a796f322068c067949e3b2500db5da\": rpc error: code = NotFound desc = could not find container \"eca3930d718438dbe8faf7feb421e80887a796f322068c067949e3b2500db5da\": container with ID starting with eca3930d718438dbe8faf7feb421e80887a796f322068c067949e3b2500db5da not found: ID does not exist"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.645857 4747 scope.go:117] "RemoveContainer" containerID="a1fc54db2d4c7ce5dd2d9fb31fcca8703a4ffcbfae6a8d40a46b0a5793371591"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.648129 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.657332 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.665003 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.672698 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.684942 4747 scope.go:117] "RemoveContainer" containerID="a1fc54db2d4c7ce5dd2d9fb31fcca8703a4ffcbfae6a8d40a46b0a5793371591"
Dec 15 05:54:30 crc kubenswrapper[4747]: E1215 05:54:30.689640 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1fc54db2d4c7ce5dd2d9fb31fcca8703a4ffcbfae6a8d40a46b0a5793371591\": container with ID starting with a1fc54db2d4c7ce5dd2d9fb31fcca8703a4ffcbfae6a8d40a46b0a5793371591 not found: ID does not exist" containerID="a1fc54db2d4c7ce5dd2d9fb31fcca8703a4ffcbfae6a8d40a46b0a5793371591"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.689688 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1fc54db2d4c7ce5dd2d9fb31fcca8703a4ffcbfae6a8d40a46b0a5793371591"} err="failed to get container status \"a1fc54db2d4c7ce5dd2d9fb31fcca8703a4ffcbfae6a8d40a46b0a5793371591\": rpc error: code = NotFound desc = could not find container \"a1fc54db2d4c7ce5dd2d9fb31fcca8703a4ffcbfae6a8d40a46b0a5793371591\": container with ID starting with a1fc54db2d4c7ce5dd2d9fb31fcca8703a4ffcbfae6a8d40a46b0a5793371591 not found: ID does not exist"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.713688 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 15 05:54:30 crc kubenswrapper[4747]: E1215 05:54:30.714561 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28c8575-a66c-445b-af56-e85ef6fb33e5" containerName="nova-metadata-log"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.714588 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28c8575-a66c-445b-af56-e85ef6fb33e5" containerName="nova-metadata-log"
Dec 15 05:54:30 crc kubenswrapper[4747]: E1215 05:54:30.714607 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28c8575-a66c-445b-af56-e85ef6fb33e5" containerName="nova-metadata-metadata"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.714619 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28c8575-a66c-445b-af56-e85ef6fb33e5" containerName="nova-metadata-metadata"
Dec 15 05:54:30 crc kubenswrapper[4747]: E1215 05:54:30.714859 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa" containerName="nova-cell1-novncproxy-novncproxy"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.714872 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa" containerName="nova-cell1-novncproxy-novncproxy"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.715500 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="e28c8575-a66c-445b-af56-e85ef6fb33e5" containerName="nova-metadata-log"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.715544 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="e28c8575-a66c-445b-af56-e85ef6fb33e5" containerName="nova-metadata-metadata"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.715575 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa" containerName="nova-cell1-novncproxy-novncproxy"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.717284 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.719839 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.721936 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.734239 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.737909 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.748627 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.750660 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.752672 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.753680 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.755686 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.772167 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a091e29-fc34-4a0a-951c-8662c11fa61e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0a091e29-fc34-4a0a-951c-8662c11fa61e\") " pod="openstack/nova-metadata-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.772227 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxqsv\" (UniqueName: \"kubernetes.io/projected/0a091e29-fc34-4a0a-951c-8662c11fa61e-kube-api-access-xxqsv\") pod \"nova-metadata-0\" (UID: \"0a091e29-fc34-4a0a-951c-8662c11fa61e\") " pod="openstack/nova-metadata-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.772252 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/51032daa-0c9c-4794-9422-2ea37212e21e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"51032daa-0c9c-4794-9422-2ea37212e21e\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.772297 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flht5\" (UniqueName: \"kubernetes.io/projected/51032daa-0c9c-4794-9422-2ea37212e21e-kube-api-access-flht5\") pod \"nova-cell1-novncproxy-0\" (UID: \"51032daa-0c9c-4794-9422-2ea37212e21e\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.772458 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a091e29-fc34-4a0a-951c-8662c11fa61e-config-data\") pod \"nova-metadata-0\" (UID: \"0a091e29-fc34-4a0a-951c-8662c11fa61e\") " pod="openstack/nova-metadata-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.772568 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a091e29-fc34-4a0a-951c-8662c11fa61e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0a091e29-fc34-4a0a-951c-8662c11fa61e\") " pod="openstack/nova-metadata-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.772597 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/51032daa-0c9c-4794-9422-2ea37212e21e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"51032daa-0c9c-4794-9422-2ea37212e21e\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.772714 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a091e29-fc34-4a0a-951c-8662c11fa61e-logs\") pod \"nova-metadata-0\" (UID: \"0a091e29-fc34-4a0a-951c-8662c11fa61e\") " pod="openstack/nova-metadata-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.772743 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51032daa-0c9c-4794-9422-2ea37212e21e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"51032daa-0c9c-4794-9422-2ea37212e21e\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.772889 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51032daa-0c9c-4794-9422-2ea37212e21e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"51032daa-0c9c-4794-9422-2ea37212e21e\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.874176 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a091e29-fc34-4a0a-951c-8662c11fa61e-logs\") pod \"nova-metadata-0\" (UID: \"0a091e29-fc34-4a0a-951c-8662c11fa61e\") " pod="openstack/nova-metadata-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.874671 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51032daa-0c9c-4794-9422-2ea37212e21e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"51032daa-0c9c-4794-9422-2ea37212e21e\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.874766 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51032daa-0c9c-4794-9422-2ea37212e21e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"51032daa-0c9c-4794-9422-2ea37212e21e\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.874858 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a091e29-fc34-4a0a-951c-8662c11fa61e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0a091e29-fc34-4a0a-951c-8662c11fa61e\") " pod="openstack/nova-metadata-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.874963 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxqsv\" (UniqueName: \"kubernetes.io/projected/0a091e29-fc34-4a0a-951c-8662c11fa61e-kube-api-access-xxqsv\") pod \"nova-metadata-0\" (UID: \"0a091e29-fc34-4a0a-951c-8662c11fa61e\") " pod="openstack/nova-metadata-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.875057 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/51032daa-0c9c-4794-9422-2ea37212e21e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"51032daa-0c9c-4794-9422-2ea37212e21e\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.875152 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flht5\" (UniqueName: \"kubernetes.io/projected/51032daa-0c9c-4794-9422-2ea37212e21e-kube-api-access-flht5\") pod \"nova-cell1-novncproxy-0\" (UID: \"51032daa-0c9c-4794-9422-2ea37212e21e\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.874655 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a091e29-fc34-4a0a-951c-8662c11fa61e-logs\") pod \"nova-metadata-0\" (UID: \"0a091e29-fc34-4a0a-951c-8662c11fa61e\") " pod="openstack/nova-metadata-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.875424 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a091e29-fc34-4a0a-951c-8662c11fa61e-config-data\") pod \"nova-metadata-0\" (UID: \"0a091e29-fc34-4a0a-951c-8662c11fa61e\") " pod="openstack/nova-metadata-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.878133 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a091e29-fc34-4a0a-951c-8662c11fa61e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0a091e29-fc34-4a0a-951c-8662c11fa61e\") " pod="openstack/nova-metadata-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.878216 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/51032daa-0c9c-4794-9422-2ea37212e21e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"51032daa-0c9c-4794-9422-2ea37212e21e\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.878992 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51032daa-0c9c-4794-9422-2ea37212e21e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"51032daa-0c9c-4794-9422-2ea37212e21e\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.879003 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a091e29-fc34-4a0a-951c-8662c11fa61e-config-data\") pod \"nova-metadata-0\" (UID: \"0a091e29-fc34-4a0a-951c-8662c11fa61e\") " pod="openstack/nova-metadata-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.879139 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a091e29-fc34-4a0a-951c-8662c11fa61e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0a091e29-fc34-4a0a-951c-8662c11fa61e\") " pod="openstack/nova-metadata-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.880539 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a091e29-fc34-4a0a-951c-8662c11fa61e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0a091e29-fc34-4a0a-951c-8662c11fa61e\") " pod="openstack/nova-metadata-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.883741 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/51032daa-0c9c-4794-9422-2ea37212e21e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"51032daa-0c9c-4794-9422-2ea37212e21e\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.888891 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flht5\" (UniqueName: \"kubernetes.io/projected/51032daa-0c9c-4794-9422-2ea37212e21e-kube-api-access-flht5\") pod \"nova-cell1-novncproxy-0\" (UID: \"51032daa-0c9c-4794-9422-2ea37212e21e\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.889454 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxqsv\" (UniqueName: \"kubernetes.io/projected/0a091e29-fc34-4a0a-951c-8662c11fa61e-kube-api-access-xxqsv\") pod \"nova-metadata-0\" (UID: \"0a091e29-fc34-4a0a-951c-8662c11fa61e\") " pod="openstack/nova-metadata-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.891113 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51032daa-0c9c-4794-9422-2ea37212e21e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"51032daa-0c9c-4794-9422-2ea37212e21e\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 15 05:54:30 crc kubenswrapper[4747]: I1215 05:54:30.893132 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/51032daa-0c9c-4794-9422-2ea37212e21e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"51032daa-0c9c-4794-9422-2ea37212e21e\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 15 05:54:31 crc kubenswrapper[4747]: I1215 05:54:31.038966 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 15 05:54:31 crc kubenswrapper[4747]: I1215 05:54:31.065877 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 15 05:54:31 crc kubenswrapper[4747]: I1215 05:54:31.474586 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 15 05:54:31 crc kubenswrapper[4747]: W1215 05:54:31.475287 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51032daa_0c9c_4794_9422_2ea37212e21e.slice/crio-5ff7ded0439c6922db17825afb47b97c61af3af7b5de61aa255775f956de5994 WatchSource:0}: Error finding container 5ff7ded0439c6922db17825afb47b97c61af3af7b5de61aa255775f956de5994: Status 404 returned error can't find the container with id 5ff7ded0439c6922db17825afb47b97c61af3af7b5de61aa255775f956de5994
Dec 15 05:54:31 crc kubenswrapper[4747]: I1215 05:54:31.521396 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 15 05:54:31 crc kubenswrapper[4747]: W1215 05:54:31.524869 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a091e29_fc34_4a0a_951c_8662c11fa61e.slice/crio-38d9576d5495a1d6f1d0e2b85dafe51e2678099e9ebc3a7e5276a81299c60d41 WatchSource:0}: Error finding container 38d9576d5495a1d6f1d0e2b85dafe51e2678099e9ebc3a7e5276a81299c60d41: Status 404 returned error can't find the container with id 38d9576d5495a1d6f1d0e2b85dafe51e2678099e9ebc3a7e5276a81299c60d41
Dec 15 05:54:31 crc kubenswrapper[4747]: I1215 05:54:31.596665 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"51032daa-0c9c-4794-9422-2ea37212e21e","Type":"ContainerStarted","Data":"5ff7ded0439c6922db17825afb47b97c61af3af7b5de61aa255775f956de5994"}
Dec 15 05:54:31 crc kubenswrapper[4747]: I1215 05:54:31.598489 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a091e29-fc34-4a0a-951c-8662c11fa61e","Type":"ContainerStarted","Data":"38d9576d5495a1d6f1d0e2b85dafe51e2678099e9ebc3a7e5276a81299c60d41"}
Dec 15 05:54:31 crc kubenswrapper[4747]: I1215 05:54:31.975331 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-p84db"
Dec 15 05:54:31 crc kubenswrapper[4747]: I1215 05:54:31.982611 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9xp4w"
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.009470 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9eb96a2-d315-4936-96e4-0be39cf72b0a-combined-ca-bundle\") pod \"e9eb96a2-d315-4936-96e4-0be39cf72b0a\" (UID: \"e9eb96a2-d315-4936-96e4-0be39cf72b0a\") "
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.009663 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9eb96a2-d315-4936-96e4-0be39cf72b0a-config-data\") pod \"e9eb96a2-d315-4936-96e4-0be39cf72b0a\" (UID: \"e9eb96a2-d315-4936-96e4-0be39cf72b0a\") "
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.009795 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-combined-ca-bundle\") pod \"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6\" (UID: \"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6\") "
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.009889 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv9bg\" (UniqueName: \"kubernetes.io/projected/e9eb96a2-d315-4936-96e4-0be39cf72b0a-kube-api-access-rv9bg\") pod \"e9eb96a2-d315-4936-96e4-0be39cf72b0a\" (UID: \"e9eb96a2-d315-4936-96e4-0be39cf72b0a\") "
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.010115 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-scripts\") pod \"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6\" (UID: \"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6\") "
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.010567 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnm2d\" (UniqueName: \"kubernetes.io/projected/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-kube-api-access-hnm2d\") pod \"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6\" (UID: \"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6\") "
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.010744 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-config-data\") pod \"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6\" (UID: \"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6\") "
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.011103 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9eb96a2-d315-4936-96e4-0be39cf72b0a-scripts\") pod \"e9eb96a2-d315-4936-96e4-0be39cf72b0a\" (UID: \"e9eb96a2-d315-4936-96e4-0be39cf72b0a\") "
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.015076 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9eb96a2-d315-4936-96e4-0be39cf72b0a-kube-api-access-rv9bg" (OuterVolumeSpecName: "kube-api-access-rv9bg") pod "e9eb96a2-d315-4936-96e4-0be39cf72b0a" (UID: "e9eb96a2-d315-4936-96e4-0be39cf72b0a"). InnerVolumeSpecName "kube-api-access-rv9bg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.015294 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-scripts" (OuterVolumeSpecName: "scripts") pod "edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6" (UID: "edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.017279 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-kube-api-access-hnm2d" (OuterVolumeSpecName: "kube-api-access-hnm2d") pod "edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6" (UID: "edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6"). InnerVolumeSpecName "kube-api-access-hnm2d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.017391 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9eb96a2-d315-4936-96e4-0be39cf72b0a-scripts" (OuterVolumeSpecName: "scripts") pod "e9eb96a2-d315-4936-96e4-0be39cf72b0a" (UID: "e9eb96a2-d315-4936-96e4-0be39cf72b0a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.035741 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6" (UID: "edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.037518 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-config-data" (OuterVolumeSpecName: "config-data") pod "edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6" (UID: "edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.037535 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9eb96a2-d315-4936-96e4-0be39cf72b0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9eb96a2-d315-4936-96e4-0be39cf72b0a" (UID: "e9eb96a2-d315-4936-96e4-0be39cf72b0a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.037963 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9eb96a2-d315-4936-96e4-0be39cf72b0a-config-data" (OuterVolumeSpecName: "config-data") pod "e9eb96a2-d315-4936-96e4-0be39cf72b0a" (UID: "e9eb96a2-d315-4936-96e4-0be39cf72b0a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.114092 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-scripts\") on node \"crc\" DevicePath \"\""
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.114151 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnm2d\" (UniqueName: \"kubernetes.io/projected/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-kube-api-access-hnm2d\") on node \"crc\" DevicePath \"\""
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.114163 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-config-data\") on node \"crc\" DevicePath \"\""
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.114189 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9eb96a2-d315-4936-96e4-0be39cf72b0a-scripts\") on node \"crc\" DevicePath \"\""
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.114200 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9eb96a2-d315-4936-96e4-0be39cf72b0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.114210 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9eb96a2-d315-4936-96e4-0be39cf72b0a-config-data\") on node \"crc\" DevicePath \"\""
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.114219 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.114239 4747 reconciler_common.go:293]
"Volume detached for volume \"kube-api-access-rv9bg\" (UniqueName: \"kubernetes.io/projected/e9eb96a2-d315-4936-96e4-0be39cf72b0a-kube-api-access-rv9bg\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.670951 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9xp4w" Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.672413 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa" path="/var/lib/kubelet/pods/0c25ca5b-7708-4e2f-a5b7-6e37b50d44aa/volumes" Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.673128 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e28c8575-a66c-445b-af56-e85ef6fb33e5" path="/var/lib/kubelet/pods/e28c8575-a66c-445b-af56-e85ef6fb33e5/volumes" Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.674201 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"51032daa-0c9c-4794-9422-2ea37212e21e","Type":"ContainerStarted","Data":"938d7dc5e47859ee47c3d0d380ffc58d5102d470a2116ac3036464217bcd43d8"} Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.674239 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9xp4w" event={"ID":"e9eb96a2-d315-4936-96e4-0be39cf72b0a","Type":"ContainerDied","Data":"e3a0c1493d6f1922804f3296fa84b185c1668904cda0f10b3f6dc11463b76c04"} Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.674257 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3a0c1493d6f1922804f3296fa84b185c1668904cda0f10b3f6dc11463b76c04" Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.699338 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.699319527 podStartE2EDuration="2.699319527s" podCreationTimestamp="2025-12-15 
05:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:54:32.670678328 +0000 UTC m=+1036.367190245" watchObservedRunningTime="2025-12-15 05:54:32.699319527 +0000 UTC m=+1036.395831444" Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.699472 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-p84db" event={"ID":"edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6","Type":"ContainerDied","Data":"fa68624167910240adec852a6b5d777e2cf959f3e597644c6e690097c3757dee"} Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.707474 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa68624167910240adec852a6b5d777e2cf959f3e597644c6e690097c3757dee" Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.699556 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-p84db" Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.707517 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a091e29-fc34-4a0a-951c-8662c11fa61e","Type":"ContainerStarted","Data":"279ba980f1ba780b63a7f73d44522079261b4dec6abc07a60a197b54f93230e6"} Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.707551 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a091e29-fc34-4a0a-951c-8662c11fa61e","Type":"ContainerStarted","Data":"54173d1e4cfee35dde023955bfaeb8964ac6f4c562f33871c9fcd74f060493e9"} Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.747938 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 15 05:54:32 crc kubenswrapper[4747]: E1215 05:54:32.748401 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9eb96a2-d315-4936-96e4-0be39cf72b0a" containerName="nova-manage" Dec 15 05:54:32 
crc kubenswrapper[4747]: I1215 05:54:32.748421 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9eb96a2-d315-4936-96e4-0be39cf72b0a" containerName="nova-manage" Dec 15 05:54:32 crc kubenswrapper[4747]: E1215 05:54:32.748444 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6" containerName="nova-cell1-conductor-db-sync" Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.748451 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6" containerName="nova-cell1-conductor-db-sync" Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.748659 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6" containerName="nova-cell1-conductor-db-sync" Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.748687 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9eb96a2-d315-4936-96e4-0be39cf72b0a" containerName="nova-manage" Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.749368 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.751880 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.759138 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.759497 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.759484245 podStartE2EDuration="2.759484245s" podCreationTimestamp="2025-12-15 05:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:54:32.720500447 +0000 UTC m=+1036.417012364" watchObservedRunningTime="2025-12-15 05:54:32.759484245 +0000 UTC m=+1036.455996162" Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.799914 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.800450 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c2f3889f-c077-4a30-8d64-348930019517" containerName="nova-api-log" containerID="cri-o://ca0bb43cac36d0e0529579796ba3cc4fe5536d1ed035e618cced2014ebc86f8f" gracePeriod=30 Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.801105 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c2f3889f-c077-4a30-8d64-348930019517" containerName="nova-api-api" containerID="cri-o://e444f11f91d05c08e98d60b9ba428826f3608d6b6ca60f3ad4f691930574b6bd" gracePeriod=30 Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.810832 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 
05:54:32.811644 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="217ecd6d-89af-4db7-a261-ec968737d482" containerName="nova-scheduler-scheduler" containerID="cri-o://fc5b2f3d1f6b8c2f84624343838b868dc02abc52706fc9032df4d871b958bf38" gracePeriod=30
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.821435 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.831902 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8472a77c-3c9b-4fa1-9572-cc21f9c2b814-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8472a77c-3c9b-4fa1-9572-cc21f9c2b814\") " pod="openstack/nova-cell1-conductor-0"
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.832108 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8472a77c-3c9b-4fa1-9572-cc21f9c2b814-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8472a77c-3c9b-4fa1-9572-cc21f9c2b814\") " pod="openstack/nova-cell1-conductor-0"
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.832146 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxsv9\" (UniqueName: \"kubernetes.io/projected/8472a77c-3c9b-4fa1-9572-cc21f9c2b814-kube-api-access-gxsv9\") pod \"nova-cell1-conductor-0\" (UID: \"8472a77c-3c9b-4fa1-9572-cc21f9c2b814\") " pod="openstack/nova-cell1-conductor-0"
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.934855 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8472a77c-3c9b-4fa1-9572-cc21f9c2b814-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8472a77c-3c9b-4fa1-9572-cc21f9c2b814\") " pod="openstack/nova-cell1-conductor-0"
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.935874 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8472a77c-3c9b-4fa1-9572-cc21f9c2b814-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8472a77c-3c9b-4fa1-9572-cc21f9c2b814\") " pod="openstack/nova-cell1-conductor-0"
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.936056 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxsv9\" (UniqueName: \"kubernetes.io/projected/8472a77c-3c9b-4fa1-9572-cc21f9c2b814-kube-api-access-gxsv9\") pod \"nova-cell1-conductor-0\" (UID: \"8472a77c-3c9b-4fa1-9572-cc21f9c2b814\") " pod="openstack/nova-cell1-conductor-0"
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.939659 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8472a77c-3c9b-4fa1-9572-cc21f9c2b814-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8472a77c-3c9b-4fa1-9572-cc21f9c2b814\") " pod="openstack/nova-cell1-conductor-0"
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.939857 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8472a77c-3c9b-4fa1-9572-cc21f9c2b814-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8472a77c-3c9b-4fa1-9572-cc21f9c2b814\") " pod="openstack/nova-cell1-conductor-0"
Dec 15 05:54:32 crc kubenswrapper[4747]: I1215 05:54:32.951140 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxsv9\" (UniqueName: \"kubernetes.io/projected/8472a77c-3c9b-4fa1-9572-cc21f9c2b814-kube-api-access-gxsv9\") pod \"nova-cell1-conductor-0\" (UID: \"8472a77c-3c9b-4fa1-9572-cc21f9c2b814\") " pod="openstack/nova-cell1-conductor-0"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.076984 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.306522 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.451658 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2f3889f-c077-4a30-8d64-348930019517-config-data\") pod \"c2f3889f-c077-4a30-8d64-348930019517\" (UID: \"c2f3889f-c077-4a30-8d64-348930019517\") "
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.451706 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2f3889f-c077-4a30-8d64-348930019517-logs\") pod \"c2f3889f-c077-4a30-8d64-348930019517\" (UID: \"c2f3889f-c077-4a30-8d64-348930019517\") "
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.451737 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2f3889f-c077-4a30-8d64-348930019517-combined-ca-bundle\") pod \"c2f3889f-c077-4a30-8d64-348930019517\" (UID: \"c2f3889f-c077-4a30-8d64-348930019517\") "
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.451899 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twgpf\" (UniqueName: \"kubernetes.io/projected/c2f3889f-c077-4a30-8d64-348930019517-kube-api-access-twgpf\") pod \"c2f3889f-c077-4a30-8d64-348930019517\" (UID: \"c2f3889f-c077-4a30-8d64-348930019517\") "
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.452396 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2f3889f-c077-4a30-8d64-348930019517-logs" (OuterVolumeSpecName: "logs") pod "c2f3889f-c077-4a30-8d64-348930019517" (UID: "c2f3889f-c077-4a30-8d64-348930019517"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.458029 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2f3889f-c077-4a30-8d64-348930019517-kube-api-access-twgpf" (OuterVolumeSpecName: "kube-api-access-twgpf") pod "c2f3889f-c077-4a30-8d64-348930019517" (UID: "c2f3889f-c077-4a30-8d64-348930019517"). InnerVolumeSpecName "kube-api-access-twgpf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.478591 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2f3889f-c077-4a30-8d64-348930019517-config-data" (OuterVolumeSpecName: "config-data") pod "c2f3889f-c077-4a30-8d64-348930019517" (UID: "c2f3889f-c077-4a30-8d64-348930019517"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.480976 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2f3889f-c077-4a30-8d64-348930019517-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2f3889f-c077-4a30-8d64-348930019517" (UID: "c2f3889f-c077-4a30-8d64-348930019517"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.524336 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 15 05:54:33 crc kubenswrapper[4747]: W1215 05:54:33.524984 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8472a77c_3c9b_4fa1_9572_cc21f9c2b814.slice/crio-9ba6caf404905d5faeec85b32e5b1955f339298ccb08e04d55dd6eb2f27c0124 WatchSource:0}: Error finding container 9ba6caf404905d5faeec85b32e5b1955f339298ccb08e04d55dd6eb2f27c0124: Status 404 returned error can't find the container with id 9ba6caf404905d5faeec85b32e5b1955f339298ccb08e04d55dd6eb2f27c0124 Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.554991 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twgpf\" (UniqueName: \"kubernetes.io/projected/c2f3889f-c077-4a30-8d64-348930019517-kube-api-access-twgpf\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.555021 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2f3889f-c077-4a30-8d64-348930019517-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.555034 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2f3889f-c077-4a30-8d64-348930019517-logs\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.555046 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2f3889f-c077-4a30-8d64-348930019517-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.718811 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"8472a77c-3c9b-4fa1-9572-cc21f9c2b814","Type":"ContainerStarted","Data":"a6fb3db16f537059697c66dd21605307e4585299cc36e28d951061b6978c9081"} Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.719234 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8472a77c-3c9b-4fa1-9572-cc21f9c2b814","Type":"ContainerStarted","Data":"9ba6caf404905d5faeec85b32e5b1955f339298ccb08e04d55dd6eb2f27c0124"} Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.719278 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.723825 4747 generic.go:334] "Generic (PLEG): container finished" podID="c2f3889f-c077-4a30-8d64-348930019517" containerID="e444f11f91d05c08e98d60b9ba428826f3608d6b6ca60f3ad4f691930574b6bd" exitCode=0 Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.723865 4747 generic.go:334] "Generic (PLEG): container finished" podID="c2f3889f-c077-4a30-8d64-348930019517" containerID="ca0bb43cac36d0e0529579796ba3cc4fe5536d1ed035e618cced2014ebc86f8f" exitCode=143 Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.723957 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.723954 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c2f3889f-c077-4a30-8d64-348930019517","Type":"ContainerDied","Data":"e444f11f91d05c08e98d60b9ba428826f3608d6b6ca60f3ad4f691930574b6bd"} Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.724037 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c2f3889f-c077-4a30-8d64-348930019517","Type":"ContainerDied","Data":"ca0bb43cac36d0e0529579796ba3cc4fe5536d1ed035e618cced2014ebc86f8f"} Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.724049 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c2f3889f-c077-4a30-8d64-348930019517","Type":"ContainerDied","Data":"44b85910aff7f73fd236fe2fc84dfddea22b5c2ebf97104e0141ae7115668843"} Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.724073 4747 scope.go:117] "RemoveContainer" containerID="e444f11f91d05c08e98d60b9ba428826f3608d6b6ca60f3ad4f691930574b6bd" Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.751010 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.750990325 podStartE2EDuration="1.750990325s" podCreationTimestamp="2025-12-15 05:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:54:33.737178481 +0000 UTC m=+1037.433690398" watchObservedRunningTime="2025-12-15 05:54:33.750990325 +0000 UTC m=+1037.447502241" Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.772431 4747 scope.go:117] "RemoveContainer" containerID="ca0bb43cac36d0e0529579796ba3cc4fe5536d1ed035e618cced2014ebc86f8f" Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.774192 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-0"] Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.788173 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.803911 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.804351 4747 scope.go:117] "RemoveContainer" containerID="e444f11f91d05c08e98d60b9ba428826f3608d6b6ca60f3ad4f691930574b6bd" Dec 15 05:54:33 crc kubenswrapper[4747]: E1215 05:54:33.804451 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2f3889f-c077-4a30-8d64-348930019517" containerName="nova-api-log" Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.804466 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2f3889f-c077-4a30-8d64-348930019517" containerName="nova-api-log" Dec 15 05:54:33 crc kubenswrapper[4747]: E1215 05:54:33.804483 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2f3889f-c077-4a30-8d64-348930019517" containerName="nova-api-api" Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.804489 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2f3889f-c077-4a30-8d64-348930019517" containerName="nova-api-api" Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.804673 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2f3889f-c077-4a30-8d64-348930019517" containerName="nova-api-api" Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.804690 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2f3889f-c077-4a30-8d64-348930019517" containerName="nova-api-log" Dec 15 05:54:33 crc kubenswrapper[4747]: E1215 05:54:33.804734 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e444f11f91d05c08e98d60b9ba428826f3608d6b6ca60f3ad4f691930574b6bd\": container with ID starting with 
e444f11f91d05c08e98d60b9ba428826f3608d6b6ca60f3ad4f691930574b6bd not found: ID does not exist" containerID="e444f11f91d05c08e98d60b9ba428826f3608d6b6ca60f3ad4f691930574b6bd"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.804768 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e444f11f91d05c08e98d60b9ba428826f3608d6b6ca60f3ad4f691930574b6bd"} err="failed to get container status \"e444f11f91d05c08e98d60b9ba428826f3608d6b6ca60f3ad4f691930574b6bd\": rpc error: code = NotFound desc = could not find container \"e444f11f91d05c08e98d60b9ba428826f3608d6b6ca60f3ad4f691930574b6bd\": container with ID starting with e444f11f91d05c08e98d60b9ba428826f3608d6b6ca60f3ad4f691930574b6bd not found: ID does not exist"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.804794 4747 scope.go:117] "RemoveContainer" containerID="ca0bb43cac36d0e0529579796ba3cc4fe5536d1ed035e618cced2014ebc86f8f"
Dec 15 05:54:33 crc kubenswrapper[4747]: E1215 05:54:33.805067 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca0bb43cac36d0e0529579796ba3cc4fe5536d1ed035e618cced2014ebc86f8f\": container with ID starting with ca0bb43cac36d0e0529579796ba3cc4fe5536d1ed035e618cced2014ebc86f8f not found: ID does not exist" containerID="ca0bb43cac36d0e0529579796ba3cc4fe5536d1ed035e618cced2014ebc86f8f"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.805097 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca0bb43cac36d0e0529579796ba3cc4fe5536d1ed035e618cced2014ebc86f8f"} err="failed to get container status \"ca0bb43cac36d0e0529579796ba3cc4fe5536d1ed035e618cced2014ebc86f8f\": rpc error: code = NotFound desc = could not find container \"ca0bb43cac36d0e0529579796ba3cc4fe5536d1ed035e618cced2014ebc86f8f\": container with ID starting with ca0bb43cac36d0e0529579796ba3cc4fe5536d1ed035e618cced2014ebc86f8f not found: ID does not exist"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.805115 4747 scope.go:117] "RemoveContainer" containerID="e444f11f91d05c08e98d60b9ba428826f3608d6b6ca60f3ad4f691930574b6bd"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.805301 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e444f11f91d05c08e98d60b9ba428826f3608d6b6ca60f3ad4f691930574b6bd"} err="failed to get container status \"e444f11f91d05c08e98d60b9ba428826f3608d6b6ca60f3ad4f691930574b6bd\": rpc error: code = NotFound desc = could not find container \"e444f11f91d05c08e98d60b9ba428826f3608d6b6ca60f3ad4f691930574b6bd\": container with ID starting with e444f11f91d05c08e98d60b9ba428826f3608d6b6ca60f3ad4f691930574b6bd not found: ID does not exist"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.805323 4747 scope.go:117] "RemoveContainer" containerID="ca0bb43cac36d0e0529579796ba3cc4fe5536d1ed035e618cced2014ebc86f8f"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.805634 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca0bb43cac36d0e0529579796ba3cc4fe5536d1ed035e618cced2014ebc86f8f"} err="failed to get container status \"ca0bb43cac36d0e0529579796ba3cc4fe5536d1ed035e618cced2014ebc86f8f\": rpc error: code = NotFound desc = could not find container \"ca0bb43cac36d0e0529579796ba3cc4fe5536d1ed035e618cced2014ebc86f8f\": container with ID starting with ca0bb43cac36d0e0529579796ba3cc4fe5536d1ed035e618cced2014ebc86f8f not found: ID does not exist"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.805807 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.807725 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.810378 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.862675 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npxxg\" (UniqueName: \"kubernetes.io/projected/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-kube-api-access-npxxg\") pod \"nova-api-0\" (UID: \"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679\") " pod="openstack/nova-api-0"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.862844 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679\") " pod="openstack/nova-api-0"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.862978 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-logs\") pod \"nova-api-0\" (UID: \"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679\") " pod="openstack/nova-api-0"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.863044 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-config-data\") pod \"nova-api-0\" (UID: \"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679\") " pod="openstack/nova-api-0"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.964424 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npxxg\" (UniqueName: \"kubernetes.io/projected/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-kube-api-access-npxxg\") pod \"nova-api-0\" (UID: \"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679\") " pod="openstack/nova-api-0"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.964505 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679\") " pod="openstack/nova-api-0"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.964561 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-logs\") pod \"nova-api-0\" (UID: \"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679\") " pod="openstack/nova-api-0"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.964611 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-config-data\") pod \"nova-api-0\" (UID: \"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679\") " pod="openstack/nova-api-0"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.965261 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-logs\") pod \"nova-api-0\" (UID: \"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679\") " pod="openstack/nova-api-0"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.968690 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679\") " pod="openstack/nova-api-0"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.969557 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-config-data\") pod \"nova-api-0\" (UID: \"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679\") " pod="openstack/nova-api-0"
Dec 15 05:54:33 crc kubenswrapper[4747]: I1215 05:54:33.983329 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npxxg\" (UniqueName: \"kubernetes.io/projected/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-kube-api-access-npxxg\") pod \"nova-api-0\" (UID: \"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679\") " pod="openstack/nova-api-0"
Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.122144 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.365204 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78556b4b47-jdvwf"
Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.433346 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-695946c66c-cs66k"]
Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.433664 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-695946c66c-cs66k" podUID="a5a72256-21cc-42e2-bdfb-b1d372846404" containerName="dnsmasq-dns" containerID="cri-o://a9aeda417c28e50e39fd3ad75c3cf3ffd9ceaed4277e8b7caa6adbd17ade0f1e" gracePeriod=10
Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.508999 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.589120 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.687788 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217ecd6d-89af-4db7-a261-ec968737d482-config-data\") pod \"217ecd6d-89af-4db7-a261-ec968737d482\" (UID: \"217ecd6d-89af-4db7-a261-ec968737d482\") "
Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.687837 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6brt\" (UniqueName: \"kubernetes.io/projected/217ecd6d-89af-4db7-a261-ec968737d482-kube-api-access-w6brt\") pod \"217ecd6d-89af-4db7-a261-ec968737d482\" (UID: \"217ecd6d-89af-4db7-a261-ec968737d482\") "
Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.687879 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217ecd6d-89af-4db7-a261-ec968737d482-combined-ca-bundle\") pod \"217ecd6d-89af-4db7-a261-ec968737d482\" (UID: \"217ecd6d-89af-4db7-a261-ec968737d482\") "
Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.691091 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2f3889f-c077-4a30-8d64-348930019517" path="/var/lib/kubelet/pods/c2f3889f-c077-4a30-8d64-348930019517/volumes"
Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.701795 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/217ecd6d-89af-4db7-a261-ec968737d482-kube-api-access-w6brt" (OuterVolumeSpecName: "kube-api-access-w6brt") pod "217ecd6d-89af-4db7-a261-ec968737d482" (UID: "217ecd6d-89af-4db7-a261-ec968737d482"). InnerVolumeSpecName "kube-api-access-w6brt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.726185 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/217ecd6d-89af-4db7-a261-ec968737d482-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "217ecd6d-89af-4db7-a261-ec968737d482" (UID: "217ecd6d-89af-4db7-a261-ec968737d482"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.728118 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/217ecd6d-89af-4db7-a261-ec968737d482-config-data" (OuterVolumeSpecName: "config-data") pod "217ecd6d-89af-4db7-a261-ec968737d482" (UID: "217ecd6d-89af-4db7-a261-ec968737d482"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.739202 4747 generic.go:334] "Generic (PLEG): container finished" podID="217ecd6d-89af-4db7-a261-ec968737d482" containerID="fc5b2f3d1f6b8c2f84624343838b868dc02abc52706fc9032df4d871b958bf38" exitCode=0 Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.739376 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.745403 4747 generic.go:334] "Generic (PLEG): container finished" podID="a5a72256-21cc-42e2-bdfb-b1d372846404" containerID="a9aeda417c28e50e39fd3ad75c3cf3ffd9ceaed4277e8b7caa6adbd17ade0f1e" exitCode=0 Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.753042 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0a091e29-fc34-4a0a-951c-8662c11fa61e" containerName="nova-metadata-log" containerID="cri-o://54173d1e4cfee35dde023955bfaeb8964ac6f4c562f33871c9fcd74f060493e9" gracePeriod=30 Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.753533 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0a091e29-fc34-4a0a-951c-8662c11fa61e" containerName="nova-metadata-metadata" containerID="cri-o://279ba980f1ba780b63a7f73d44522079261b4dec6abc07a60a197b54f93230e6" gracePeriod=30 Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.789371 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217ecd6d-89af-4db7-a261-ec968737d482-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.789402 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6brt\" (UniqueName: \"kubernetes.io/projected/217ecd6d-89af-4db7-a261-ec968737d482-kube-api-access-w6brt\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.789414 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217ecd6d-89af-4db7-a261-ec968737d482-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.793485 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"217ecd6d-89af-4db7-a261-ec968737d482","Type":"ContainerDied","Data":"fc5b2f3d1f6b8c2f84624343838b868dc02abc52706fc9032df4d871b958bf38"} Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.793524 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"217ecd6d-89af-4db7-a261-ec968737d482","Type":"ContainerDied","Data":"8e0fe39aa8280b7316b43b161a6b3b877f7ab9a3cbaebeb8825674315f867644"} Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.793539 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-695946c66c-cs66k" event={"ID":"a5a72256-21cc-42e2-bdfb-b1d372846404","Type":"ContainerDied","Data":"a9aeda417c28e50e39fd3ad75c3cf3ffd9ceaed4277e8b7caa6adbd17ade0f1e"} Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.793552 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679","Type":"ContainerStarted","Data":"3838102619e02d9a8b656d1654c863d9ebad0a26554d767c58f7a17ec0424eac"} Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.793571 4747 scope.go:117] "RemoveContainer" containerID="fc5b2f3d1f6b8c2f84624343838b868dc02abc52706fc9032df4d871b958bf38" Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.830068 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.832658 4747 scope.go:117] "RemoveContainer" containerID="fc5b2f3d1f6b8c2f84624343838b868dc02abc52706fc9032df4d871b958bf38" Dec 15 05:54:34 crc kubenswrapper[4747]: E1215 05:54:34.834326 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc5b2f3d1f6b8c2f84624343838b868dc02abc52706fc9032df4d871b958bf38\": container with ID starting with fc5b2f3d1f6b8c2f84624343838b868dc02abc52706fc9032df4d871b958bf38 not found: ID does not exist" 
containerID="fc5b2f3d1f6b8c2f84624343838b868dc02abc52706fc9032df4d871b958bf38" Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.834380 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc5b2f3d1f6b8c2f84624343838b868dc02abc52706fc9032df4d871b958bf38"} err="failed to get container status \"fc5b2f3d1f6b8c2f84624343838b868dc02abc52706fc9032df4d871b958bf38\": rpc error: code = NotFound desc = could not find container \"fc5b2f3d1f6b8c2f84624343838b868dc02abc52706fc9032df4d871b958bf38\": container with ID starting with fc5b2f3d1f6b8c2f84624343838b868dc02abc52706fc9032df4d871b958bf38 not found: ID does not exist" Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.840682 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.853445 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 15 05:54:34 crc kubenswrapper[4747]: E1215 05:54:34.853849 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="217ecd6d-89af-4db7-a261-ec968737d482" containerName="nova-scheduler-scheduler" Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.853869 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="217ecd6d-89af-4db7-a261-ec968737d482" containerName="nova-scheduler-scheduler" Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.854072 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="217ecd6d-89af-4db7-a261-ec968737d482" containerName="nova-scheduler-scheduler" Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.855651 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.858691 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.885669 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.891163 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d03e88-8521-4da8-9d5b-28185ed47abc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"84d03e88-8521-4da8-9d5b-28185ed47abc\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.891302 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdjl6\" (UniqueName: \"kubernetes.io/projected/84d03e88-8521-4da8-9d5b-28185ed47abc-kube-api-access-kdjl6\") pod \"nova-scheduler-0\" (UID: \"84d03e88-8521-4da8-9d5b-28185ed47abc\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.891408 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d03e88-8521-4da8-9d5b-28185ed47abc-config-data\") pod \"nova-scheduler-0\" (UID: \"84d03e88-8521-4da8-9d5b-28185ed47abc\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.929667 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.996420 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d03e88-8521-4da8-9d5b-28185ed47abc-config-data\") pod \"nova-scheduler-0\" (UID: 
\"84d03e88-8521-4da8-9d5b-28185ed47abc\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.996616 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d03e88-8521-4da8-9d5b-28185ed47abc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"84d03e88-8521-4da8-9d5b-28185ed47abc\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:34 crc kubenswrapper[4747]: I1215 05:54:34.996741 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdjl6\" (UniqueName: \"kubernetes.io/projected/84d03e88-8521-4da8-9d5b-28185ed47abc-kube-api-access-kdjl6\") pod \"nova-scheduler-0\" (UID: \"84d03e88-8521-4da8-9d5b-28185ed47abc\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.002610 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d03e88-8521-4da8-9d5b-28185ed47abc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"84d03e88-8521-4da8-9d5b-28185ed47abc\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.004640 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d03e88-8521-4da8-9d5b-28185ed47abc-config-data\") pod \"nova-scheduler-0\" (UID: \"84d03e88-8521-4da8-9d5b-28185ed47abc\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.018279 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdjl6\" (UniqueName: \"kubernetes.io/projected/84d03e88-8521-4da8-9d5b-28185ed47abc-kube-api-access-kdjl6\") pod \"nova-scheduler-0\" (UID: \"84d03e88-8521-4da8-9d5b-28185ed47abc\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.022581 4747 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-695946c66c-cs66k" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.200309 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-dns-svc\") pod \"a5a72256-21cc-42e2-bdfb-b1d372846404\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.200730 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-ovsdbserver-nb\") pod \"a5a72256-21cc-42e2-bdfb-b1d372846404\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.200789 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-ovsdbserver-sb\") pod \"a5a72256-21cc-42e2-bdfb-b1d372846404\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.201423 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-dns-swift-storage-0\") pod \"a5a72256-21cc-42e2-bdfb-b1d372846404\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.201490 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-config\") pod \"a5a72256-21cc-42e2-bdfb-b1d372846404\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.201601 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zn2tm\" (UniqueName: \"kubernetes.io/projected/a5a72256-21cc-42e2-bdfb-b1d372846404-kube-api-access-zn2tm\") pod \"a5a72256-21cc-42e2-bdfb-b1d372846404\" (UID: \"a5a72256-21cc-42e2-bdfb-b1d372846404\") " Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.206437 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5a72256-21cc-42e2-bdfb-b1d372846404-kube-api-access-zn2tm" (OuterVolumeSpecName: "kube-api-access-zn2tm") pod "a5a72256-21cc-42e2-bdfb-b1d372846404" (UID: "a5a72256-21cc-42e2-bdfb-b1d372846404"). InnerVolumeSpecName "kube-api-access-zn2tm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.214052 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.239446 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a5a72256-21cc-42e2-bdfb-b1d372846404" (UID: "a5a72256-21cc-42e2-bdfb-b1d372846404"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.259195 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.265726 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-config" (OuterVolumeSpecName: "config") pod "a5a72256-21cc-42e2-bdfb-b1d372846404" (UID: "a5a72256-21cc-42e2-bdfb-b1d372846404"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.266190 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a5a72256-21cc-42e2-bdfb-b1d372846404" (UID: "a5a72256-21cc-42e2-bdfb-b1d372846404"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.269664 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a5a72256-21cc-42e2-bdfb-b1d372846404" (UID: "a5a72256-21cc-42e2-bdfb-b1d372846404"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.272844 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a5a72256-21cc-42e2-bdfb-b1d372846404" (UID: "a5a72256-21cc-42e2-bdfb-b1d372846404"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.305309 4747 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.305340 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.305352 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.305361 4747 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.305374 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a72256-21cc-42e2-bdfb-b1d372846404-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.305382 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn2tm\" (UniqueName: \"kubernetes.io/projected/a5a72256-21cc-42e2-bdfb-b1d372846404-kube-api-access-zn2tm\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.406693 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a091e29-fc34-4a0a-951c-8662c11fa61e-nova-metadata-tls-certs\") pod \"0a091e29-fc34-4a0a-951c-8662c11fa61e\" (UID: 
\"0a091e29-fc34-4a0a-951c-8662c11fa61e\") " Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.406751 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a091e29-fc34-4a0a-951c-8662c11fa61e-logs\") pod \"0a091e29-fc34-4a0a-951c-8662c11fa61e\" (UID: \"0a091e29-fc34-4a0a-951c-8662c11fa61e\") " Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.406806 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxqsv\" (UniqueName: \"kubernetes.io/projected/0a091e29-fc34-4a0a-951c-8662c11fa61e-kube-api-access-xxqsv\") pod \"0a091e29-fc34-4a0a-951c-8662c11fa61e\" (UID: \"0a091e29-fc34-4a0a-951c-8662c11fa61e\") " Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.406867 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a091e29-fc34-4a0a-951c-8662c11fa61e-combined-ca-bundle\") pod \"0a091e29-fc34-4a0a-951c-8662c11fa61e\" (UID: \"0a091e29-fc34-4a0a-951c-8662c11fa61e\") " Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.406909 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a091e29-fc34-4a0a-951c-8662c11fa61e-config-data\") pod \"0a091e29-fc34-4a0a-951c-8662c11fa61e\" (UID: \"0a091e29-fc34-4a0a-951c-8662c11fa61e\") " Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.407843 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a091e29-fc34-4a0a-951c-8662c11fa61e-logs" (OuterVolumeSpecName: "logs") pod "0a091e29-fc34-4a0a-951c-8662c11fa61e" (UID: "0a091e29-fc34-4a0a-951c-8662c11fa61e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.415005 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a091e29-fc34-4a0a-951c-8662c11fa61e-kube-api-access-xxqsv" (OuterVolumeSpecName: "kube-api-access-xxqsv") pod "0a091e29-fc34-4a0a-951c-8662c11fa61e" (UID: "0a091e29-fc34-4a0a-951c-8662c11fa61e"). InnerVolumeSpecName "kube-api-access-xxqsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.433250 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a091e29-fc34-4a0a-951c-8662c11fa61e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a091e29-fc34-4a0a-951c-8662c11fa61e" (UID: "0a091e29-fc34-4a0a-951c-8662c11fa61e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.438711 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a091e29-fc34-4a0a-951c-8662c11fa61e-config-data" (OuterVolumeSpecName: "config-data") pod "0a091e29-fc34-4a0a-951c-8662c11fa61e" (UID: "0a091e29-fc34-4a0a-951c-8662c11fa61e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.460345 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a091e29-fc34-4a0a-951c-8662c11fa61e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0a091e29-fc34-4a0a-951c-8662c11fa61e" (UID: "0a091e29-fc34-4a0a-951c-8662c11fa61e"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.509554 4747 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a091e29-fc34-4a0a-951c-8662c11fa61e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.509598 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a091e29-fc34-4a0a-951c-8662c11fa61e-logs\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.509612 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxqsv\" (UniqueName: \"kubernetes.io/projected/0a091e29-fc34-4a0a-951c-8662c11fa61e-kube-api-access-xxqsv\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.509624 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a091e29-fc34-4a0a-951c-8662c11fa61e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.509640 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a091e29-fc34-4a0a-951c-8662c11fa61e-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.688832 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 15 05:54:35 crc kubenswrapper[4747]: W1215 05:54:35.690411 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84d03e88_8521_4da8_9d5b_28185ed47abc.slice/crio-87115b9605f48b73ccb47da85f0b108ea75dab5bfe03d99002f8cb116089cad8 WatchSource:0}: Error finding container 87115b9605f48b73ccb47da85f0b108ea75dab5bfe03d99002f8cb116089cad8: Status 404 returned error can't 
find the container with id 87115b9605f48b73ccb47da85f0b108ea75dab5bfe03d99002f8cb116089cad8 Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.762791 4747 generic.go:334] "Generic (PLEG): container finished" podID="0a091e29-fc34-4a0a-951c-8662c11fa61e" containerID="279ba980f1ba780b63a7f73d44522079261b4dec6abc07a60a197b54f93230e6" exitCode=0 Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.763055 4747 generic.go:334] "Generic (PLEG): container finished" podID="0a091e29-fc34-4a0a-951c-8662c11fa61e" containerID="54173d1e4cfee35dde023955bfaeb8964ac6f4c562f33871c9fcd74f060493e9" exitCode=143 Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.762917 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.762879 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a091e29-fc34-4a0a-951c-8662c11fa61e","Type":"ContainerDied","Data":"279ba980f1ba780b63a7f73d44522079261b4dec6abc07a60a197b54f93230e6"} Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.771864 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a091e29-fc34-4a0a-951c-8662c11fa61e","Type":"ContainerDied","Data":"54173d1e4cfee35dde023955bfaeb8964ac6f4c562f33871c9fcd74f060493e9"} Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.771880 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a091e29-fc34-4a0a-951c-8662c11fa61e","Type":"ContainerDied","Data":"38d9576d5495a1d6f1d0e2b85dafe51e2678099e9ebc3a7e5276a81299c60d41"} Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.771946 4747 scope.go:117] "RemoveContainer" containerID="279ba980f1ba780b63a7f73d44522079261b4dec6abc07a60a197b54f93230e6" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.772532 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679","Type":"ContainerStarted","Data":"ba6633aba93f8cebbb1780644b8692676db07b155bbb4603183556d3653c8737"} Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.772557 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679","Type":"ContainerStarted","Data":"debb15f0daf261dd60168c746ccc67c94ea7902c78f54185ae090f32261cfcfe"} Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.776609 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-695946c66c-cs66k" event={"ID":"a5a72256-21cc-42e2-bdfb-b1d372846404","Type":"ContainerDied","Data":"b94198a122e1e6a5915f96cc2fc6bc507b23eaef1eddb20a810e6bd9d64a380b"} Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.776670 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-695946c66c-cs66k" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.782586 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"84d03e88-8521-4da8-9d5b-28185ed47abc","Type":"ContainerStarted","Data":"87115b9605f48b73ccb47da85f0b108ea75dab5bfe03d99002f8cb116089cad8"} Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.802757 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.802733329 podStartE2EDuration="2.802733329s" podCreationTimestamp="2025-12-15 05:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:54:35.790911789 +0000 UTC m=+1039.487423706" watchObservedRunningTime="2025-12-15 05:54:35.802733329 +0000 UTC m=+1039.499245246" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.812074 4747 scope.go:117] "RemoveContainer" 
containerID="54173d1e4cfee35dde023955bfaeb8964ac6f4c562f33871c9fcd74f060493e9" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.829875 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.842240 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.848481 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-695946c66c-cs66k"] Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.852974 4747 scope.go:117] "RemoveContainer" containerID="279ba980f1ba780b63a7f73d44522079261b4dec6abc07a60a197b54f93230e6" Dec 15 05:54:35 crc kubenswrapper[4747]: E1215 05:54:35.853410 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"279ba980f1ba780b63a7f73d44522079261b4dec6abc07a60a197b54f93230e6\": container with ID starting with 279ba980f1ba780b63a7f73d44522079261b4dec6abc07a60a197b54f93230e6 not found: ID does not exist" containerID="279ba980f1ba780b63a7f73d44522079261b4dec6abc07a60a197b54f93230e6" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.853447 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"279ba980f1ba780b63a7f73d44522079261b4dec6abc07a60a197b54f93230e6"} err="failed to get container status \"279ba980f1ba780b63a7f73d44522079261b4dec6abc07a60a197b54f93230e6\": rpc error: code = NotFound desc = could not find container \"279ba980f1ba780b63a7f73d44522079261b4dec6abc07a60a197b54f93230e6\": container with ID starting with 279ba980f1ba780b63a7f73d44522079261b4dec6abc07a60a197b54f93230e6 not found: ID does not exist" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.853485 4747 scope.go:117] "RemoveContainer" containerID="54173d1e4cfee35dde023955bfaeb8964ac6f4c562f33871c9fcd74f060493e9" Dec 15 05:54:35 crc 
kubenswrapper[4747]: I1215 05:54:35.853551 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 15 05:54:35 crc kubenswrapper[4747]: E1215 05:54:35.854443 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5a72256-21cc-42e2-bdfb-b1d372846404" containerName="init" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.854469 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5a72256-21cc-42e2-bdfb-b1d372846404" containerName="init" Dec 15 05:54:35 crc kubenswrapper[4747]: E1215 05:54:35.854489 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5a72256-21cc-42e2-bdfb-b1d372846404" containerName="dnsmasq-dns" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.854496 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5a72256-21cc-42e2-bdfb-b1d372846404" containerName="dnsmasq-dns" Dec 15 05:54:35 crc kubenswrapper[4747]: E1215 05:54:35.854527 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a091e29-fc34-4a0a-951c-8662c11fa61e" containerName="nova-metadata-metadata" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.854532 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a091e29-fc34-4a0a-951c-8662c11fa61e" containerName="nova-metadata-metadata" Dec 15 05:54:35 crc kubenswrapper[4747]: E1215 05:54:35.854546 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a091e29-fc34-4a0a-951c-8662c11fa61e" containerName="nova-metadata-log" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.854553 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a091e29-fc34-4a0a-951c-8662c11fa61e" containerName="nova-metadata-log" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.854759 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a091e29-fc34-4a0a-951c-8662c11fa61e" containerName="nova-metadata-log" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.854795 4747 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="0a091e29-fc34-4a0a-951c-8662c11fa61e" containerName="nova-metadata-metadata" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.854807 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5a72256-21cc-42e2-bdfb-b1d372846404" containerName="dnsmasq-dns" Dec 15 05:54:35 crc kubenswrapper[4747]: E1215 05:54:35.863780 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54173d1e4cfee35dde023955bfaeb8964ac6f4c562f33871c9fcd74f060493e9\": container with ID starting with 54173d1e4cfee35dde023955bfaeb8964ac6f4c562f33871c9fcd74f060493e9 not found: ID does not exist" containerID="54173d1e4cfee35dde023955bfaeb8964ac6f4c562f33871c9fcd74f060493e9" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.863826 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54173d1e4cfee35dde023955bfaeb8964ac6f4c562f33871c9fcd74f060493e9"} err="failed to get container status \"54173d1e4cfee35dde023955bfaeb8964ac6f4c562f33871c9fcd74f060493e9\": rpc error: code = NotFound desc = could not find container \"54173d1e4cfee35dde023955bfaeb8964ac6f4c562f33871c9fcd74f060493e9\": container with ID starting with 54173d1e4cfee35dde023955bfaeb8964ac6f4c562f33871c9fcd74f060493e9 not found: ID does not exist" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.863855 4747 scope.go:117] "RemoveContainer" containerID="279ba980f1ba780b63a7f73d44522079261b4dec6abc07a60a197b54f93230e6" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.865385 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"279ba980f1ba780b63a7f73d44522079261b4dec6abc07a60a197b54f93230e6"} err="failed to get container status \"279ba980f1ba780b63a7f73d44522079261b4dec6abc07a60a197b54f93230e6\": rpc error: code = NotFound desc = could not find container 
\"279ba980f1ba780b63a7f73d44522079261b4dec6abc07a60a197b54f93230e6\": container with ID starting with 279ba980f1ba780b63a7f73d44522079261b4dec6abc07a60a197b54f93230e6 not found: ID does not exist" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.865409 4747 scope.go:117] "RemoveContainer" containerID="54173d1e4cfee35dde023955bfaeb8964ac6f4c562f33871c9fcd74f060493e9" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.865576 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.865907 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54173d1e4cfee35dde023955bfaeb8964ac6f4c562f33871c9fcd74f060493e9"} err="failed to get container status \"54173d1e4cfee35dde023955bfaeb8964ac6f4c562f33871c9fcd74f060493e9\": rpc error: code = NotFound desc = could not find container \"54173d1e4cfee35dde023955bfaeb8964ac6f4c562f33871c9fcd74f060493e9\": container with ID starting with 54173d1e4cfee35dde023955bfaeb8964ac6f4c562f33871c9fcd74f060493e9 not found: ID does not exist" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.865956 4747 scope.go:117] "RemoveContainer" containerID="a9aeda417c28e50e39fd3ad75c3cf3ffd9ceaed4277e8b7caa6adbd17ade0f1e" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.867688 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.868110 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.883583 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-695946c66c-cs66k"] Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.897322 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 15 05:54:35 crc 
kubenswrapper[4747]: I1215 05:54:35.918502 4747 scope.go:117] "RemoveContainer" containerID="a1652dd51f7a8891b34de564181e5e71553328b1c275de1fecb85666487ab0e4" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.930317 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e335bdf2-e70d-4140-835b-d8071700c0f6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e335bdf2-e70d-4140-835b-d8071700c0f6\") " pod="openstack/nova-metadata-0" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.930513 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqpfc\" (UniqueName: \"kubernetes.io/projected/e335bdf2-e70d-4140-835b-d8071700c0f6-kube-api-access-bqpfc\") pod \"nova-metadata-0\" (UID: \"e335bdf2-e70d-4140-835b-d8071700c0f6\") " pod="openstack/nova-metadata-0" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.932829 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e335bdf2-e70d-4140-835b-d8071700c0f6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e335bdf2-e70d-4140-835b-d8071700c0f6\") " pod="openstack/nova-metadata-0" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.932887 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e335bdf2-e70d-4140-835b-d8071700c0f6-config-data\") pod \"nova-metadata-0\" (UID: \"e335bdf2-e70d-4140-835b-d8071700c0f6\") " pod="openstack/nova-metadata-0" Dec 15 05:54:35 crc kubenswrapper[4747]: I1215 05:54:35.933015 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e335bdf2-e70d-4140-835b-d8071700c0f6-logs\") pod \"nova-metadata-0\" (UID: 
\"e335bdf2-e70d-4140-835b-d8071700c0f6\") " pod="openstack/nova-metadata-0" Dec 15 05:54:36 crc kubenswrapper[4747]: I1215 05:54:36.034401 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e335bdf2-e70d-4140-835b-d8071700c0f6-logs\") pod \"nova-metadata-0\" (UID: \"e335bdf2-e70d-4140-835b-d8071700c0f6\") " pod="openstack/nova-metadata-0" Dec 15 05:54:36 crc kubenswrapper[4747]: I1215 05:54:36.034579 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e335bdf2-e70d-4140-835b-d8071700c0f6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e335bdf2-e70d-4140-835b-d8071700c0f6\") " pod="openstack/nova-metadata-0" Dec 15 05:54:36 crc kubenswrapper[4747]: I1215 05:54:36.034676 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqpfc\" (UniqueName: \"kubernetes.io/projected/e335bdf2-e70d-4140-835b-d8071700c0f6-kube-api-access-bqpfc\") pod \"nova-metadata-0\" (UID: \"e335bdf2-e70d-4140-835b-d8071700c0f6\") " pod="openstack/nova-metadata-0" Dec 15 05:54:36 crc kubenswrapper[4747]: I1215 05:54:36.034736 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e335bdf2-e70d-4140-835b-d8071700c0f6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e335bdf2-e70d-4140-835b-d8071700c0f6\") " pod="openstack/nova-metadata-0" Dec 15 05:54:36 crc kubenswrapper[4747]: I1215 05:54:36.034765 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e335bdf2-e70d-4140-835b-d8071700c0f6-config-data\") pod \"nova-metadata-0\" (UID: \"e335bdf2-e70d-4140-835b-d8071700c0f6\") " pod="openstack/nova-metadata-0" Dec 15 05:54:36 crc kubenswrapper[4747]: I1215 05:54:36.035654 4747 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e335bdf2-e70d-4140-835b-d8071700c0f6-logs\") pod \"nova-metadata-0\" (UID: \"e335bdf2-e70d-4140-835b-d8071700c0f6\") " pod="openstack/nova-metadata-0" Dec 15 05:54:36 crc kubenswrapper[4747]: I1215 05:54:36.038544 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e335bdf2-e70d-4140-835b-d8071700c0f6-config-data\") pod \"nova-metadata-0\" (UID: \"e335bdf2-e70d-4140-835b-d8071700c0f6\") " pod="openstack/nova-metadata-0" Dec 15 05:54:36 crc kubenswrapper[4747]: I1215 05:54:36.039092 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 15 05:54:36 crc kubenswrapper[4747]: I1215 05:54:36.039197 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e335bdf2-e70d-4140-835b-d8071700c0f6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e335bdf2-e70d-4140-835b-d8071700c0f6\") " pod="openstack/nova-metadata-0" Dec 15 05:54:36 crc kubenswrapper[4747]: I1215 05:54:36.039569 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e335bdf2-e70d-4140-835b-d8071700c0f6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e335bdf2-e70d-4140-835b-d8071700c0f6\") " pod="openstack/nova-metadata-0" Dec 15 05:54:36 crc kubenswrapper[4747]: I1215 05:54:36.056374 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqpfc\" (UniqueName: \"kubernetes.io/projected/e335bdf2-e70d-4140-835b-d8071700c0f6-kube-api-access-bqpfc\") pod \"nova-metadata-0\" (UID: \"e335bdf2-e70d-4140-835b-d8071700c0f6\") " pod="openstack/nova-metadata-0" Dec 15 05:54:36 crc kubenswrapper[4747]: I1215 05:54:36.180311 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 15 05:54:36 crc kubenswrapper[4747]: I1215 05:54:36.616796 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 15 05:54:36 crc kubenswrapper[4747]: I1215 05:54:36.642953 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a091e29-fc34-4a0a-951c-8662c11fa61e" path="/var/lib/kubelet/pods/0a091e29-fc34-4a0a-951c-8662c11fa61e/volumes" Dec 15 05:54:36 crc kubenswrapper[4747]: I1215 05:54:36.643559 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="217ecd6d-89af-4db7-a261-ec968737d482" path="/var/lib/kubelet/pods/217ecd6d-89af-4db7-a261-ec968737d482/volumes" Dec 15 05:54:36 crc kubenswrapper[4747]: I1215 05:54:36.644133 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5a72256-21cc-42e2-bdfb-b1d372846404" path="/var/lib/kubelet/pods/a5a72256-21cc-42e2-bdfb-b1d372846404/volumes" Dec 15 05:54:36 crc kubenswrapper[4747]: I1215 05:54:36.802515 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e335bdf2-e70d-4140-835b-d8071700c0f6","Type":"ContainerStarted","Data":"be188d4f2d55417a5e25ac92abbb09ed50adc4bff835f4e9b58e7a4bbe8349fd"} Dec 15 05:54:36 crc kubenswrapper[4747]: I1215 05:54:36.803776 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e335bdf2-e70d-4140-835b-d8071700c0f6","Type":"ContainerStarted","Data":"05ff72bb97849d974e90976380dac3e60e5c8bae881084e3847e644b2bada91a"} Dec 15 05:54:36 crc kubenswrapper[4747]: I1215 05:54:36.805109 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"84d03e88-8521-4da8-9d5b-28185ed47abc","Type":"ContainerStarted","Data":"dc26235c0c1c670c3bd6d437305a8521cc2ad981febbb96ad43cc43801fc49cf"} Dec 15 05:54:36 crc kubenswrapper[4747]: I1215 05:54:36.841384 4747 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.84135324 podStartE2EDuration="2.84135324s" podCreationTimestamp="2025-12-15 05:54:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:54:36.821036728 +0000 UTC m=+1040.517548655" watchObservedRunningTime="2025-12-15 05:54:36.84135324 +0000 UTC m=+1040.537865157" Dec 15 05:54:37 crc kubenswrapper[4747]: I1215 05:54:37.818983 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e335bdf2-e70d-4140-835b-d8071700c0f6","Type":"ContainerStarted","Data":"213ef4370571a38ac03e4905dce468332d87713c7789e9d62290d84bc8b62135"} Dec 15 05:54:37 crc kubenswrapper[4747]: I1215 05:54:37.842312 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.842292511 podStartE2EDuration="2.842292511s" podCreationTimestamp="2025-12-15 05:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:54:37.837408768 +0000 UTC m=+1041.533920685" watchObservedRunningTime="2025-12-15 05:54:37.842292511 +0000 UTC m=+1041.538804428" Dec 15 05:54:38 crc kubenswrapper[4747]: I1215 05:54:38.105810 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 15 05:54:39 crc kubenswrapper[4747]: I1215 05:54:39.273234 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 15 05:54:39 crc kubenswrapper[4747]: I1215 05:54:39.273712 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="9180543c-2a85-4639-82ac-7180f7a1274c" containerName="kube-state-metrics" containerID="cri-o://d7ce6367fcfd9a641aaca2a50cf1a049d3e7f84c489a9d2eaf31d66c0384e4eb" gracePeriod=30 Dec 15 05:54:39 crc 
kubenswrapper[4747]: I1215 05:54:39.691105 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 15 05:54:39 crc kubenswrapper[4747]: I1215 05:54:39.816162 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv4nb\" (UniqueName: \"kubernetes.io/projected/9180543c-2a85-4639-82ac-7180f7a1274c-kube-api-access-rv4nb\") pod \"9180543c-2a85-4639-82ac-7180f7a1274c\" (UID: \"9180543c-2a85-4639-82ac-7180f7a1274c\") " Dec 15 05:54:39 crc kubenswrapper[4747]: I1215 05:54:39.822589 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9180543c-2a85-4639-82ac-7180f7a1274c-kube-api-access-rv4nb" (OuterVolumeSpecName: "kube-api-access-rv4nb") pod "9180543c-2a85-4639-82ac-7180f7a1274c" (UID: "9180543c-2a85-4639-82ac-7180f7a1274c"). InnerVolumeSpecName "kube-api-access-rv4nb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:54:39 crc kubenswrapper[4747]: I1215 05:54:39.836018 4747 generic.go:334] "Generic (PLEG): container finished" podID="9180543c-2a85-4639-82ac-7180f7a1274c" containerID="d7ce6367fcfd9a641aaca2a50cf1a049d3e7f84c489a9d2eaf31d66c0384e4eb" exitCode=2 Dec 15 05:54:39 crc kubenswrapper[4747]: I1215 05:54:39.836057 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9180543c-2a85-4639-82ac-7180f7a1274c","Type":"ContainerDied","Data":"d7ce6367fcfd9a641aaca2a50cf1a049d3e7f84c489a9d2eaf31d66c0384e4eb"} Dec 15 05:54:39 crc kubenswrapper[4747]: I1215 05:54:39.836063 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 15 05:54:39 crc kubenswrapper[4747]: I1215 05:54:39.836084 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9180543c-2a85-4639-82ac-7180f7a1274c","Type":"ContainerDied","Data":"b90a9f8864e43cfc819d42c4d34e7f93f40cbf2f5b999ec9feca0b5284da0d1a"} Dec 15 05:54:39 crc kubenswrapper[4747]: I1215 05:54:39.836104 4747 scope.go:117] "RemoveContainer" containerID="d7ce6367fcfd9a641aaca2a50cf1a049d3e7f84c489a9d2eaf31d66c0384e4eb" Dec 15 05:54:39 crc kubenswrapper[4747]: I1215 05:54:39.863546 4747 scope.go:117] "RemoveContainer" containerID="d7ce6367fcfd9a641aaca2a50cf1a049d3e7f84c489a9d2eaf31d66c0384e4eb" Dec 15 05:54:39 crc kubenswrapper[4747]: E1215 05:54:39.863848 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7ce6367fcfd9a641aaca2a50cf1a049d3e7f84c489a9d2eaf31d66c0384e4eb\": container with ID starting with d7ce6367fcfd9a641aaca2a50cf1a049d3e7f84c489a9d2eaf31d66c0384e4eb not found: ID does not exist" containerID="d7ce6367fcfd9a641aaca2a50cf1a049d3e7f84c489a9d2eaf31d66c0384e4eb" Dec 15 05:54:39 crc kubenswrapper[4747]: I1215 05:54:39.863881 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7ce6367fcfd9a641aaca2a50cf1a049d3e7f84c489a9d2eaf31d66c0384e4eb"} err="failed to get container status \"d7ce6367fcfd9a641aaca2a50cf1a049d3e7f84c489a9d2eaf31d66c0384e4eb\": rpc error: code = NotFound desc = could not find container \"d7ce6367fcfd9a641aaca2a50cf1a049d3e7f84c489a9d2eaf31d66c0384e4eb\": container with ID starting with d7ce6367fcfd9a641aaca2a50cf1a049d3e7f84c489a9d2eaf31d66c0384e4eb not found: ID does not exist" Dec 15 05:54:39 crc kubenswrapper[4747]: I1215 05:54:39.876029 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 15 05:54:39 crc kubenswrapper[4747]: I1215 
05:54:39.883292 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 15 05:54:39 crc kubenswrapper[4747]: I1215 05:54:39.893184 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 15 05:54:39 crc kubenswrapper[4747]: E1215 05:54:39.894726 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9180543c-2a85-4639-82ac-7180f7a1274c" containerName="kube-state-metrics" Dec 15 05:54:39 crc kubenswrapper[4747]: I1215 05:54:39.894750 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="9180543c-2a85-4639-82ac-7180f7a1274c" containerName="kube-state-metrics" Dec 15 05:54:39 crc kubenswrapper[4747]: I1215 05:54:39.894950 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="9180543c-2a85-4639-82ac-7180f7a1274c" containerName="kube-state-metrics" Dec 15 05:54:39 crc kubenswrapper[4747]: I1215 05:54:39.896996 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 15 05:54:39 crc kubenswrapper[4747]: I1215 05:54:39.900467 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 15 05:54:39 crc kubenswrapper[4747]: I1215 05:54:39.900619 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 15 05:54:39 crc kubenswrapper[4747]: I1215 05:54:39.905589 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 15 05:54:39 crc kubenswrapper[4747]: I1215 05:54:39.918627 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv4nb\" (UniqueName: \"kubernetes.io/projected/9180543c-2a85-4639-82ac-7180f7a1274c-kube-api-access-rv4nb\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:40 crc kubenswrapper[4747]: I1215 05:54:40.021681 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/39eb3298-a864-45a5-b1a1-df263390967d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"39eb3298-a864-45a5-b1a1-df263390967d\") " pod="openstack/kube-state-metrics-0" Dec 15 05:54:40 crc kubenswrapper[4747]: I1215 05:54:40.022595 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqpmz\" (UniqueName: \"kubernetes.io/projected/39eb3298-a864-45a5-b1a1-df263390967d-kube-api-access-pqpmz\") pod \"kube-state-metrics-0\" (UID: \"39eb3298-a864-45a5-b1a1-df263390967d\") " pod="openstack/kube-state-metrics-0" Dec 15 05:54:40 crc kubenswrapper[4747]: I1215 05:54:40.022706 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39eb3298-a864-45a5-b1a1-df263390967d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"39eb3298-a864-45a5-b1a1-df263390967d\") " pod="openstack/kube-state-metrics-0" Dec 15 05:54:40 crc kubenswrapper[4747]: I1215 05:54:40.022837 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/39eb3298-a864-45a5-b1a1-df263390967d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"39eb3298-a864-45a5-b1a1-df263390967d\") " pod="openstack/kube-state-metrics-0" Dec 15 05:54:40 crc kubenswrapper[4747]: I1215 05:54:40.125402 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/39eb3298-a864-45a5-b1a1-df263390967d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"39eb3298-a864-45a5-b1a1-df263390967d\") " pod="openstack/kube-state-metrics-0" Dec 15 05:54:40 crc kubenswrapper[4747]: I1215 05:54:40.125996 4747 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/39eb3298-a864-45a5-b1a1-df263390967d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"39eb3298-a864-45a5-b1a1-df263390967d\") " pod="openstack/kube-state-metrics-0" Dec 15 05:54:40 crc kubenswrapper[4747]: I1215 05:54:40.126273 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqpmz\" (UniqueName: \"kubernetes.io/projected/39eb3298-a864-45a5-b1a1-df263390967d-kube-api-access-pqpmz\") pod \"kube-state-metrics-0\" (UID: \"39eb3298-a864-45a5-b1a1-df263390967d\") " pod="openstack/kube-state-metrics-0" Dec 15 05:54:40 crc kubenswrapper[4747]: I1215 05:54:40.126397 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39eb3298-a864-45a5-b1a1-df263390967d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"39eb3298-a864-45a5-b1a1-df263390967d\") " pod="openstack/kube-state-metrics-0" Dec 15 05:54:40 crc kubenswrapper[4747]: I1215 05:54:40.141018 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/39eb3298-a864-45a5-b1a1-df263390967d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"39eb3298-a864-45a5-b1a1-df263390967d\") " pod="openstack/kube-state-metrics-0" Dec 15 05:54:40 crc kubenswrapper[4747]: I1215 05:54:40.149546 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39eb3298-a864-45a5-b1a1-df263390967d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"39eb3298-a864-45a5-b1a1-df263390967d\") " pod="openstack/kube-state-metrics-0" Dec 15 05:54:40 crc kubenswrapper[4747]: I1215 05:54:40.152425 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/39eb3298-a864-45a5-b1a1-df263390967d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"39eb3298-a864-45a5-b1a1-df263390967d\") " pod="openstack/kube-state-metrics-0" Dec 15 05:54:40 crc kubenswrapper[4747]: I1215 05:54:40.164470 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqpmz\" (UniqueName: \"kubernetes.io/projected/39eb3298-a864-45a5-b1a1-df263390967d-kube-api-access-pqpmz\") pod \"kube-state-metrics-0\" (UID: \"39eb3298-a864-45a5-b1a1-df263390967d\") " pod="openstack/kube-state-metrics-0" Dec 15 05:54:40 crc kubenswrapper[4747]: I1215 05:54:40.213167 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 15 05:54:40 crc kubenswrapper[4747]: I1215 05:54:40.215565 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 15 05:54:40 crc kubenswrapper[4747]: I1215 05:54:40.641775 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9180543c-2a85-4639-82ac-7180f7a1274c" path="/var/lib/kubelet/pods/9180543c-2a85-4639-82ac-7180f7a1274c/volumes" Dec 15 05:54:40 crc kubenswrapper[4747]: I1215 05:54:40.645526 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 15 05:54:40 crc kubenswrapper[4747]: I1215 05:54:40.845203 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"39eb3298-a864-45a5-b1a1-df263390967d","Type":"ContainerStarted","Data":"d37e10623179d8a6cf7ef7706bb14b6e3f852ccaf3c1b524bd934c7563d8a49f"} Dec 15 05:54:41 crc kubenswrapper[4747]: I1215 05:54:41.039269 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 15 05:54:41 crc kubenswrapper[4747]: I1215 05:54:41.066334 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" 
Dec 15 05:54:41 crc kubenswrapper[4747]: I1215 05:54:41.180567 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 15 05:54:41 crc kubenswrapper[4747]: I1215 05:54:41.180620 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 15 05:54:41 crc kubenswrapper[4747]: I1215 05:54:41.535627 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:54:41 crc kubenswrapper[4747]: I1215 05:54:41.536026 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1afd3938-1da2-4c72-811d-fc9ec8f21171" containerName="sg-core" containerID="cri-o://ed889bc211ce461cf1e85e83fd6d860227dd9d95280f19c1142a58866f4157ab" gracePeriod=30 Dec 15 05:54:41 crc kubenswrapper[4747]: I1215 05:54:41.536079 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1afd3938-1da2-4c72-811d-fc9ec8f21171" containerName="proxy-httpd" containerID="cri-o://b9d1deb8110d848600fee7fc0455b915cf4cffe26ae7bc3ce122843396715989" gracePeriod=30 Dec 15 05:54:41 crc kubenswrapper[4747]: I1215 05:54:41.536087 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1afd3938-1da2-4c72-811d-fc9ec8f21171" containerName="ceilometer-notification-agent" containerID="cri-o://36cfbd60223f11dd0e6dbbcecb56b0c80f0a9300545de3f3ec074131c3e00be6" gracePeriod=30 Dec 15 05:54:41 crc kubenswrapper[4747]: I1215 05:54:41.536260 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1afd3938-1da2-4c72-811d-fc9ec8f21171" containerName="ceilometer-central-agent" containerID="cri-o://a30fa25ec1ba0c2528ed748b740e0d33225cec181c2ae331cdf729efe943fbcf" gracePeriod=30 Dec 15 05:54:41 crc kubenswrapper[4747]: I1215 05:54:41.863865 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"39eb3298-a864-45a5-b1a1-df263390967d","Type":"ContainerStarted","Data":"e1c1ac0ba765309d4b2da3466dbdfcaef00fc397c1730afd0e97459a1ca91563"} Dec 15 05:54:41 crc kubenswrapper[4747]: I1215 05:54:41.864719 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 15 05:54:41 crc kubenswrapper[4747]: I1215 05:54:41.869002 4747 generic.go:334] "Generic (PLEG): container finished" podID="1afd3938-1da2-4c72-811d-fc9ec8f21171" containerID="b9d1deb8110d848600fee7fc0455b915cf4cffe26ae7bc3ce122843396715989" exitCode=0 Dec 15 05:54:41 crc kubenswrapper[4747]: I1215 05:54:41.869094 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1afd3938-1da2-4c72-811d-fc9ec8f21171","Type":"ContainerDied","Data":"b9d1deb8110d848600fee7fc0455b915cf4cffe26ae7bc3ce122843396715989"} Dec 15 05:54:41 crc kubenswrapper[4747]: I1215 05:54:41.869182 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1afd3938-1da2-4c72-811d-fc9ec8f21171","Type":"ContainerDied","Data":"ed889bc211ce461cf1e85e83fd6d860227dd9d95280f19c1142a58866f4157ab"} Dec 15 05:54:41 crc kubenswrapper[4747]: I1215 05:54:41.869118 4747 generic.go:334] "Generic (PLEG): container finished" podID="1afd3938-1da2-4c72-811d-fc9ec8f21171" containerID="ed889bc211ce461cf1e85e83fd6d860227dd9d95280f19c1142a58866f4157ab" exitCode=2 Dec 15 05:54:41 crc kubenswrapper[4747]: I1215 05:54:41.869230 4747 generic.go:334] "Generic (PLEG): container finished" podID="1afd3938-1da2-4c72-811d-fc9ec8f21171" containerID="a30fa25ec1ba0c2528ed748b740e0d33225cec181c2ae331cdf729efe943fbcf" exitCode=0 Dec 15 05:54:41 crc kubenswrapper[4747]: I1215 05:54:41.869565 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1afd3938-1da2-4c72-811d-fc9ec8f21171","Type":"ContainerDied","Data":"a30fa25ec1ba0c2528ed748b740e0d33225cec181c2ae331cdf729efe943fbcf"} Dec 15 05:54:41 crc kubenswrapper[4747]: I1215 05:54:41.892280 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 15 05:54:41 crc kubenswrapper[4747]: I1215 05:54:41.892389 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.614079812 podStartE2EDuration="2.892376207s" podCreationTimestamp="2025-12-15 05:54:39 +0000 UTC" firstStartedPulling="2025-12-15 05:54:40.658264548 +0000 UTC m=+1044.354776466" lastFinishedPulling="2025-12-15 05:54:40.936560944 +0000 UTC m=+1044.633072861" observedRunningTime="2025-12-15 05:54:41.882877054 +0000 UTC m=+1045.579388972" watchObservedRunningTime="2025-12-15 05:54:41.892376207 +0000 UTC m=+1045.588888123" Dec 15 05:54:42 crc kubenswrapper[4747]: I1215 05:54:42.066374 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-vxw6d"] Dec 15 05:54:42 crc kubenswrapper[4747]: I1215 05:54:42.067497 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vxw6d" Dec 15 05:54:42 crc kubenswrapper[4747]: I1215 05:54:42.069498 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 15 05:54:42 crc kubenswrapper[4747]: I1215 05:54:42.069747 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 15 05:54:42 crc kubenswrapper[4747]: I1215 05:54:42.073866 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b819ab60-7712-47d0-853b-4ae39eb770b1-scripts\") pod \"nova-cell1-cell-mapping-vxw6d\" (UID: \"b819ab60-7712-47d0-853b-4ae39eb770b1\") " pod="openstack/nova-cell1-cell-mapping-vxw6d" Dec 15 05:54:42 crc kubenswrapper[4747]: I1215 05:54:42.073955 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b819ab60-7712-47d0-853b-4ae39eb770b1-config-data\") pod \"nova-cell1-cell-mapping-vxw6d\" (UID: \"b819ab60-7712-47d0-853b-4ae39eb770b1\") " pod="openstack/nova-cell1-cell-mapping-vxw6d" Dec 15 05:54:42 crc kubenswrapper[4747]: I1215 05:54:42.074011 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fk2n\" (UniqueName: \"kubernetes.io/projected/b819ab60-7712-47d0-853b-4ae39eb770b1-kube-api-access-8fk2n\") pod \"nova-cell1-cell-mapping-vxw6d\" (UID: \"b819ab60-7712-47d0-853b-4ae39eb770b1\") " pod="openstack/nova-cell1-cell-mapping-vxw6d" Dec 15 05:54:42 crc kubenswrapper[4747]: I1215 05:54:42.074048 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b819ab60-7712-47d0-853b-4ae39eb770b1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vxw6d\" (UID: \"b819ab60-7712-47d0-853b-4ae39eb770b1\") 
" pod="openstack/nova-cell1-cell-mapping-vxw6d" Dec 15 05:54:42 crc kubenswrapper[4747]: I1215 05:54:42.079528 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vxw6d"] Dec 15 05:54:42 crc kubenswrapper[4747]: I1215 05:54:42.176824 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fk2n\" (UniqueName: \"kubernetes.io/projected/b819ab60-7712-47d0-853b-4ae39eb770b1-kube-api-access-8fk2n\") pod \"nova-cell1-cell-mapping-vxw6d\" (UID: \"b819ab60-7712-47d0-853b-4ae39eb770b1\") " pod="openstack/nova-cell1-cell-mapping-vxw6d" Dec 15 05:54:42 crc kubenswrapper[4747]: I1215 05:54:42.176894 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b819ab60-7712-47d0-853b-4ae39eb770b1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vxw6d\" (UID: \"b819ab60-7712-47d0-853b-4ae39eb770b1\") " pod="openstack/nova-cell1-cell-mapping-vxw6d" Dec 15 05:54:42 crc kubenswrapper[4747]: I1215 05:54:42.177024 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b819ab60-7712-47d0-853b-4ae39eb770b1-scripts\") pod \"nova-cell1-cell-mapping-vxw6d\" (UID: \"b819ab60-7712-47d0-853b-4ae39eb770b1\") " pod="openstack/nova-cell1-cell-mapping-vxw6d" Dec 15 05:54:42 crc kubenswrapper[4747]: I1215 05:54:42.177081 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b819ab60-7712-47d0-853b-4ae39eb770b1-config-data\") pod \"nova-cell1-cell-mapping-vxw6d\" (UID: \"b819ab60-7712-47d0-853b-4ae39eb770b1\") " pod="openstack/nova-cell1-cell-mapping-vxw6d" Dec 15 05:54:42 crc kubenswrapper[4747]: I1215 05:54:42.184013 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b819ab60-7712-47d0-853b-4ae39eb770b1-scripts\") pod \"nova-cell1-cell-mapping-vxw6d\" (UID: \"b819ab60-7712-47d0-853b-4ae39eb770b1\") " pod="openstack/nova-cell1-cell-mapping-vxw6d" Dec 15 05:54:42 crc kubenswrapper[4747]: I1215 05:54:42.184192 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b819ab60-7712-47d0-853b-4ae39eb770b1-config-data\") pod \"nova-cell1-cell-mapping-vxw6d\" (UID: \"b819ab60-7712-47d0-853b-4ae39eb770b1\") " pod="openstack/nova-cell1-cell-mapping-vxw6d" Dec 15 05:54:42 crc kubenswrapper[4747]: I1215 05:54:42.184866 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b819ab60-7712-47d0-853b-4ae39eb770b1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vxw6d\" (UID: \"b819ab60-7712-47d0-853b-4ae39eb770b1\") " pod="openstack/nova-cell1-cell-mapping-vxw6d" Dec 15 05:54:42 crc kubenswrapper[4747]: I1215 05:54:42.199841 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fk2n\" (UniqueName: \"kubernetes.io/projected/b819ab60-7712-47d0-853b-4ae39eb770b1-kube-api-access-8fk2n\") pod \"nova-cell1-cell-mapping-vxw6d\" (UID: \"b819ab60-7712-47d0-853b-4ae39eb770b1\") " pod="openstack/nova-cell1-cell-mapping-vxw6d" Dec 15 05:54:42 crc kubenswrapper[4747]: I1215 05:54:42.382350 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vxw6d" Dec 15 05:54:42 crc kubenswrapper[4747]: I1215 05:54:42.818541 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vxw6d"] Dec 15 05:54:42 crc kubenswrapper[4747]: I1215 05:54:42.908322 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vxw6d" event={"ID":"b819ab60-7712-47d0-853b-4ae39eb770b1","Type":"ContainerStarted","Data":"c6a265d6ce604bf5952132140b88b36b6d0c7ece70a52d2bb912095feaba42fa"} Dec 15 05:54:43 crc kubenswrapper[4747]: I1215 05:54:43.920274 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vxw6d" event={"ID":"b819ab60-7712-47d0-853b-4ae39eb770b1","Type":"ContainerStarted","Data":"4283f361cf5c820766617f90d7b7861bcb68066c0d666b32ab51c95055ada8b8"} Dec 15 05:54:43 crc kubenswrapper[4747]: I1215 05:54:43.946717 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-vxw6d" podStartSLOduration=1.946701707 podStartE2EDuration="1.946701707s" podCreationTimestamp="2025-12-15 05:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:54:43.94314519 +0000 UTC m=+1047.639657107" watchObservedRunningTime="2025-12-15 05:54:43.946701707 +0000 UTC m=+1047.643213624" Dec 15 05:54:44 crc kubenswrapper[4747]: I1215 05:54:44.123563 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 15 05:54:44 crc kubenswrapper[4747]: I1215 05:54:44.123624 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 15 05:54:45 crc kubenswrapper[4747]: I1215 05:54:45.206130 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7cb287d9-6fdc-4ecd-80aa-fff39ebd8679" containerName="nova-api-log" 
probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 15 05:54:45 crc kubenswrapper[4747]: I1215 05:54:45.206195 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7cb287d9-6fdc-4ecd-80aa-fff39ebd8679" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 15 05:54:45 crc kubenswrapper[4747]: I1215 05:54:45.215695 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 15 05:54:45 crc kubenswrapper[4747]: I1215 05:54:45.239978 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 15 05:54:45 crc kubenswrapper[4747]: I1215 05:54:45.961401 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 15 05:54:46 crc kubenswrapper[4747]: I1215 05:54:46.180685 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 15 05:54:46 crc kubenswrapper[4747]: I1215 05:54:46.181145 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 15 05:54:47 crc kubenswrapper[4747]: I1215 05:54:47.201090 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e335bdf2-e70d-4140-835b-d8071700c0f6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 15 05:54:47 crc kubenswrapper[4747]: I1215 05:54:47.201180 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e335bdf2-e70d-4140-835b-d8071700c0f6" containerName="nova-metadata-log" probeResult="failure" output="Get 
\"https://10.217.0.188:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 15 05:54:47 crc kubenswrapper[4747]: I1215 05:54:47.957752 4747 generic.go:334] "Generic (PLEG): container finished" podID="b819ab60-7712-47d0-853b-4ae39eb770b1" containerID="4283f361cf5c820766617f90d7b7861bcb68066c0d666b32ab51c95055ada8b8" exitCode=0 Dec 15 05:54:47 crc kubenswrapper[4747]: I1215 05:54:47.957812 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vxw6d" event={"ID":"b819ab60-7712-47d0-853b-4ae39eb770b1","Type":"ContainerDied","Data":"4283f361cf5c820766617f90d7b7861bcb68066c0d666b32ab51c95055ada8b8"} Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.631527 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.814059 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-combined-ca-bundle\") pod \"1afd3938-1da2-4c72-811d-fc9ec8f21171\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.814095 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-scripts\") pod \"1afd3938-1da2-4c72-811d-fc9ec8f21171\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.814142 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdzql\" (UniqueName: \"kubernetes.io/projected/1afd3938-1da2-4c72-811d-fc9ec8f21171-kube-api-access-tdzql\") pod \"1afd3938-1da2-4c72-811d-fc9ec8f21171\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.814192 4747 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-sg-core-conf-yaml\") pod \"1afd3938-1da2-4c72-811d-fc9ec8f21171\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.814256 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-config-data\") pod \"1afd3938-1da2-4c72-811d-fc9ec8f21171\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.814279 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1afd3938-1da2-4c72-811d-fc9ec8f21171-run-httpd\") pod \"1afd3938-1da2-4c72-811d-fc9ec8f21171\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.814390 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1afd3938-1da2-4c72-811d-fc9ec8f21171-log-httpd\") pod \"1afd3938-1da2-4c72-811d-fc9ec8f21171\" (UID: \"1afd3938-1da2-4c72-811d-fc9ec8f21171\") " Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.815023 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1afd3938-1da2-4c72-811d-fc9ec8f21171-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1afd3938-1da2-4c72-811d-fc9ec8f21171" (UID: "1afd3938-1da2-4c72-811d-fc9ec8f21171"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.815117 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1afd3938-1da2-4c72-811d-fc9ec8f21171-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1afd3938-1da2-4c72-811d-fc9ec8f21171" (UID: "1afd3938-1da2-4c72-811d-fc9ec8f21171"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.826567 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1afd3938-1da2-4c72-811d-fc9ec8f21171-kube-api-access-tdzql" (OuterVolumeSpecName: "kube-api-access-tdzql") pod "1afd3938-1da2-4c72-811d-fc9ec8f21171" (UID: "1afd3938-1da2-4c72-811d-fc9ec8f21171"). InnerVolumeSpecName "kube-api-access-tdzql". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.826845 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-scripts" (OuterVolumeSpecName: "scripts") pod "1afd3938-1da2-4c72-811d-fc9ec8f21171" (UID: "1afd3938-1da2-4c72-811d-fc9ec8f21171"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.841857 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1afd3938-1da2-4c72-811d-fc9ec8f21171" (UID: "1afd3938-1da2-4c72-811d-fc9ec8f21171"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.884970 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1afd3938-1da2-4c72-811d-fc9ec8f21171" (UID: "1afd3938-1da2-4c72-811d-fc9ec8f21171"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.888681 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-config-data" (OuterVolumeSpecName: "config-data") pod "1afd3938-1da2-4c72-811d-fc9ec8f21171" (UID: "1afd3938-1da2-4c72-811d-fc9ec8f21171"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.916600 4747 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1afd3938-1da2-4c72-811d-fc9ec8f21171-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.916641 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.916655 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.916664 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdzql\" (UniqueName: \"kubernetes.io/projected/1afd3938-1da2-4c72-811d-fc9ec8f21171-kube-api-access-tdzql\") on node \"crc\" DevicePath \"\"" Dec 15 
05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.916673 4747 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.916679 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1afd3938-1da2-4c72-811d-fc9ec8f21171-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.916687 4747 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1afd3938-1da2-4c72-811d-fc9ec8f21171-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.967941 4747 generic.go:334] "Generic (PLEG): container finished" podID="1afd3938-1da2-4c72-811d-fc9ec8f21171" containerID="36cfbd60223f11dd0e6dbbcecb56b0c80f0a9300545de3f3ec074131c3e00be6" exitCode=0 Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.967965 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1afd3938-1da2-4c72-811d-fc9ec8f21171","Type":"ContainerDied","Data":"36cfbd60223f11dd0e6dbbcecb56b0c80f0a9300545de3f3ec074131c3e00be6"} Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.968002 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.968018 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1afd3938-1da2-4c72-811d-fc9ec8f21171","Type":"ContainerDied","Data":"e7c5738912773c418168bbc3c00c197c2026354e8dc2fbbfbd9e63a52fb8e2f1"} Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.968045 4747 scope.go:117] "RemoveContainer" containerID="b9d1deb8110d848600fee7fc0455b915cf4cffe26ae7bc3ce122843396715989" Dec 15 05:54:48 crc kubenswrapper[4747]: I1215 05:54:48.992532 4747 scope.go:117] "RemoveContainer" containerID="ed889bc211ce461cf1e85e83fd6d860227dd9d95280f19c1142a58866f4157ab" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.002371 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.016412 4747 scope.go:117] "RemoveContainer" containerID="36cfbd60223f11dd0e6dbbcecb56b0c80f0a9300545de3f3ec074131c3e00be6" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.022487 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.050722 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:54:49 crc kubenswrapper[4747]: E1215 05:54:49.051490 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1afd3938-1da2-4c72-811d-fc9ec8f21171" containerName="sg-core" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.051515 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="1afd3938-1da2-4c72-811d-fc9ec8f21171" containerName="sg-core" Dec 15 05:54:49 crc kubenswrapper[4747]: E1215 05:54:49.051550 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1afd3938-1da2-4c72-811d-fc9ec8f21171" containerName="ceilometer-central-agent" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.051560 4747 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1afd3938-1da2-4c72-811d-fc9ec8f21171" containerName="ceilometer-central-agent" Dec 15 05:54:49 crc kubenswrapper[4747]: E1215 05:54:49.051593 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1afd3938-1da2-4c72-811d-fc9ec8f21171" containerName="ceilometer-notification-agent" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.051605 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="1afd3938-1da2-4c72-811d-fc9ec8f21171" containerName="ceilometer-notification-agent" Dec 15 05:54:49 crc kubenswrapper[4747]: E1215 05:54:49.051624 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1afd3938-1da2-4c72-811d-fc9ec8f21171" containerName="proxy-httpd" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.051631 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="1afd3938-1da2-4c72-811d-fc9ec8f21171" containerName="proxy-httpd" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.052095 4747 scope.go:117] "RemoveContainer" containerID="a30fa25ec1ba0c2528ed748b740e0d33225cec181c2ae331cdf729efe943fbcf" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.052329 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="1afd3938-1da2-4c72-811d-fc9ec8f21171" containerName="sg-core" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.052348 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="1afd3938-1da2-4c72-811d-fc9ec8f21171" containerName="ceilometer-central-agent" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.052364 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="1afd3938-1da2-4c72-811d-fc9ec8f21171" containerName="ceilometer-notification-agent" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.052382 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="1afd3938-1da2-4c72-811d-fc9ec8f21171" containerName="proxy-httpd" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 
05:54:49.055785 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.064531 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.064670 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.064953 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.086774 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.101044 4747 scope.go:117] "RemoveContainer" containerID="b9d1deb8110d848600fee7fc0455b915cf4cffe26ae7bc3ce122843396715989" Dec 15 05:54:49 crc kubenswrapper[4747]: E1215 05:54:49.112798 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9d1deb8110d848600fee7fc0455b915cf4cffe26ae7bc3ce122843396715989\": container with ID starting with b9d1deb8110d848600fee7fc0455b915cf4cffe26ae7bc3ce122843396715989 not found: ID does not exist" containerID="b9d1deb8110d848600fee7fc0455b915cf4cffe26ae7bc3ce122843396715989" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.112844 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9d1deb8110d848600fee7fc0455b915cf4cffe26ae7bc3ce122843396715989"} err="failed to get container status \"b9d1deb8110d848600fee7fc0455b915cf4cffe26ae7bc3ce122843396715989\": rpc error: code = NotFound desc = could not find container \"b9d1deb8110d848600fee7fc0455b915cf4cffe26ae7bc3ce122843396715989\": container with ID starting with b9d1deb8110d848600fee7fc0455b915cf4cffe26ae7bc3ce122843396715989 not found: ID does 
not exist" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.112873 4747 scope.go:117] "RemoveContainer" containerID="ed889bc211ce461cf1e85e83fd6d860227dd9d95280f19c1142a58866f4157ab" Dec 15 05:54:49 crc kubenswrapper[4747]: E1215 05:54:49.114088 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed889bc211ce461cf1e85e83fd6d860227dd9d95280f19c1142a58866f4157ab\": container with ID starting with ed889bc211ce461cf1e85e83fd6d860227dd9d95280f19c1142a58866f4157ab not found: ID does not exist" containerID="ed889bc211ce461cf1e85e83fd6d860227dd9d95280f19c1142a58866f4157ab" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.114222 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed889bc211ce461cf1e85e83fd6d860227dd9d95280f19c1142a58866f4157ab"} err="failed to get container status \"ed889bc211ce461cf1e85e83fd6d860227dd9d95280f19c1142a58866f4157ab\": rpc error: code = NotFound desc = could not find container \"ed889bc211ce461cf1e85e83fd6d860227dd9d95280f19c1142a58866f4157ab\": container with ID starting with ed889bc211ce461cf1e85e83fd6d860227dd9d95280f19c1142a58866f4157ab not found: ID does not exist" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.114337 4747 scope.go:117] "RemoveContainer" containerID="36cfbd60223f11dd0e6dbbcecb56b0c80f0a9300545de3f3ec074131c3e00be6" Dec 15 05:54:49 crc kubenswrapper[4747]: E1215 05:54:49.115001 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36cfbd60223f11dd0e6dbbcecb56b0c80f0a9300545de3f3ec074131c3e00be6\": container with ID starting with 36cfbd60223f11dd0e6dbbcecb56b0c80f0a9300545de3f3ec074131c3e00be6 not found: ID does not exist" containerID="36cfbd60223f11dd0e6dbbcecb56b0c80f0a9300545de3f3ec074131c3e00be6" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.115026 4747 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36cfbd60223f11dd0e6dbbcecb56b0c80f0a9300545de3f3ec074131c3e00be6"} err="failed to get container status \"36cfbd60223f11dd0e6dbbcecb56b0c80f0a9300545de3f3ec074131c3e00be6\": rpc error: code = NotFound desc = could not find container \"36cfbd60223f11dd0e6dbbcecb56b0c80f0a9300545de3f3ec074131c3e00be6\": container with ID starting with 36cfbd60223f11dd0e6dbbcecb56b0c80f0a9300545de3f3ec074131c3e00be6 not found: ID does not exist" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.115048 4747 scope.go:117] "RemoveContainer" containerID="a30fa25ec1ba0c2528ed748b740e0d33225cec181c2ae331cdf729efe943fbcf" Dec 15 05:54:49 crc kubenswrapper[4747]: E1215 05:54:49.115313 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a30fa25ec1ba0c2528ed748b740e0d33225cec181c2ae331cdf729efe943fbcf\": container with ID starting with a30fa25ec1ba0c2528ed748b740e0d33225cec181c2ae331cdf729efe943fbcf not found: ID does not exist" containerID="a30fa25ec1ba0c2528ed748b740e0d33225cec181c2ae331cdf729efe943fbcf" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.115333 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a30fa25ec1ba0c2528ed748b740e0d33225cec181c2ae331cdf729efe943fbcf"} err="failed to get container status \"a30fa25ec1ba0c2528ed748b740e0d33225cec181c2ae331cdf729efe943fbcf\": rpc error: code = NotFound desc = could not find container \"a30fa25ec1ba0c2528ed748b740e0d33225cec181c2ae331cdf729efe943fbcf\": container with ID starting with a30fa25ec1ba0c2528ed748b740e0d33225cec181c2ae331cdf729efe943fbcf not found: ID does not exist" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.231271 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.232459 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-config-data\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.232575 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0942975b-f269-434f-8853-927e58959a1f-run-httpd\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.232646 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.232684 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phtcs\" (UniqueName: \"kubernetes.io/projected/0942975b-f269-434f-8853-927e58959a1f-kube-api-access-phtcs\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.232747 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-scripts\") pod \"ceilometer-0\" (UID: 
\"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.232792 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0942975b-f269-434f-8853-927e58959a1f-log-httpd\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.232844 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.278986 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vxw6d" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.335611 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phtcs\" (UniqueName: \"kubernetes.io/projected/0942975b-f269-434f-8853-927e58959a1f-kube-api-access-phtcs\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.335716 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-scripts\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.335889 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0942975b-f269-434f-8853-927e58959a1f-log-httpd\") pod \"ceilometer-0\" (UID: 
\"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.335986 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.336122 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.336173 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-config-data\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.336272 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0942975b-f269-434f-8853-927e58959a1f-run-httpd\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.336375 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.337488 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0942975b-f269-434f-8853-927e58959a1f-log-httpd\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.337586 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0942975b-f269-434f-8853-927e58959a1f-run-httpd\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.343836 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.344647 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.344732 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.344796 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-scripts\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 
05:54:49.352537 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-config-data\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.354631 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phtcs\" (UniqueName: \"kubernetes.io/projected/0942975b-f269-434f-8853-927e58959a1f-kube-api-access-phtcs\") pod \"ceilometer-0\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.390237 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.441426 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b819ab60-7712-47d0-853b-4ae39eb770b1-config-data\") pod \"b819ab60-7712-47d0-853b-4ae39eb770b1\" (UID: \"b819ab60-7712-47d0-853b-4ae39eb770b1\") " Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.441715 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fk2n\" (UniqueName: \"kubernetes.io/projected/b819ab60-7712-47d0-853b-4ae39eb770b1-kube-api-access-8fk2n\") pod \"b819ab60-7712-47d0-853b-4ae39eb770b1\" (UID: \"b819ab60-7712-47d0-853b-4ae39eb770b1\") " Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.441806 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b819ab60-7712-47d0-853b-4ae39eb770b1-scripts\") pod \"b819ab60-7712-47d0-853b-4ae39eb770b1\" (UID: \"b819ab60-7712-47d0-853b-4ae39eb770b1\") " Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.441835 4747 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b819ab60-7712-47d0-853b-4ae39eb770b1-combined-ca-bundle\") pod \"b819ab60-7712-47d0-853b-4ae39eb770b1\" (UID: \"b819ab60-7712-47d0-853b-4ae39eb770b1\") " Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.448283 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b819ab60-7712-47d0-853b-4ae39eb770b1-scripts" (OuterVolumeSpecName: "scripts") pod "b819ab60-7712-47d0-853b-4ae39eb770b1" (UID: "b819ab60-7712-47d0-853b-4ae39eb770b1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.448404 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b819ab60-7712-47d0-853b-4ae39eb770b1-kube-api-access-8fk2n" (OuterVolumeSpecName: "kube-api-access-8fk2n") pod "b819ab60-7712-47d0-853b-4ae39eb770b1" (UID: "b819ab60-7712-47d0-853b-4ae39eb770b1"). InnerVolumeSpecName "kube-api-access-8fk2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.466124 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b819ab60-7712-47d0-853b-4ae39eb770b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b819ab60-7712-47d0-853b-4ae39eb770b1" (UID: "b819ab60-7712-47d0-853b-4ae39eb770b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.466477 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b819ab60-7712-47d0-853b-4ae39eb770b1-config-data" (OuterVolumeSpecName: "config-data") pod "b819ab60-7712-47d0-853b-4ae39eb770b1" (UID: "b819ab60-7712-47d0-853b-4ae39eb770b1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.544305 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b819ab60-7712-47d0-853b-4ae39eb770b1-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.544483 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fk2n\" (UniqueName: \"kubernetes.io/projected/b819ab60-7712-47d0-853b-4ae39eb770b1-kube-api-access-8fk2n\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.544495 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b819ab60-7712-47d0-853b-4ae39eb770b1-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.544503 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b819ab60-7712-47d0-853b-4ae39eb770b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.839347 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.982289 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vxw6d" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.982321 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vxw6d" event={"ID":"b819ab60-7712-47d0-853b-4ae39eb770b1","Type":"ContainerDied","Data":"c6a265d6ce604bf5952132140b88b36b6d0c7ece70a52d2bb912095feaba42fa"} Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.982951 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6a265d6ce604bf5952132140b88b36b6d0c7ece70a52d2bb912095feaba42fa" Dec 15 05:54:49 crc kubenswrapper[4747]: I1215 05:54:49.983611 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0942975b-f269-434f-8853-927e58959a1f","Type":"ContainerStarted","Data":"f937e0c6bb2adac5b06b7a66baae5722c1acb2dabb0ad0e27ed954143a0be37c"} Dec 15 05:54:50 crc kubenswrapper[4747]: I1215 05:54:50.159455 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 15 05:54:50 crc kubenswrapper[4747]: I1215 05:54:50.159760 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7cb287d9-6fdc-4ecd-80aa-fff39ebd8679" containerName="nova-api-log" containerID="cri-o://debb15f0daf261dd60168c746ccc67c94ea7902c78f54185ae090f32261cfcfe" gracePeriod=30 Dec 15 05:54:50 crc kubenswrapper[4747]: I1215 05:54:50.159824 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7cb287d9-6fdc-4ecd-80aa-fff39ebd8679" containerName="nova-api-api" containerID="cri-o://ba6633aba93f8cebbb1780644b8692676db07b155bbb4603183556d3653c8737" gracePeriod=30 Dec 15 05:54:50 crc kubenswrapper[4747]: I1215 05:54:50.169067 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 15 05:54:50 crc kubenswrapper[4747]: I1215 05:54:50.169256 4747 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/nova-scheduler-0" podUID="84d03e88-8521-4da8-9d5b-28185ed47abc" containerName="nova-scheduler-scheduler" containerID="cri-o://dc26235c0c1c670c3bd6d437305a8521cc2ad981febbb96ad43cc43801fc49cf" gracePeriod=30 Dec 15 05:54:50 crc kubenswrapper[4747]: I1215 05:54:50.177060 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 15 05:54:50 crc kubenswrapper[4747]: I1215 05:54:50.182144 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e335bdf2-e70d-4140-835b-d8071700c0f6" containerName="nova-metadata-metadata" containerID="cri-o://213ef4370571a38ac03e4905dce468332d87713c7789e9d62290d84bc8b62135" gracePeriod=30 Dec 15 05:54:50 crc kubenswrapper[4747]: I1215 05:54:50.182303 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e335bdf2-e70d-4140-835b-d8071700c0f6" containerName="nova-metadata-log" containerID="cri-o://be188d4f2d55417a5e25ac92abbb09ed50adc4bff835f4e9b58e7a4bbe8349fd" gracePeriod=30 Dec 15 05:54:50 crc kubenswrapper[4747]: E1215 05:54:50.218194 4747 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dc26235c0c1c670c3bd6d437305a8521cc2ad981febbb96ad43cc43801fc49cf" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 15 05:54:50 crc kubenswrapper[4747]: E1215 05:54:50.219760 4747 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dc26235c0c1c670c3bd6d437305a8521cc2ad981febbb96ad43cc43801fc49cf" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 15 05:54:50 crc kubenswrapper[4747]: E1215 05:54:50.221718 4747 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dc26235c0c1c670c3bd6d437305a8521cc2ad981febbb96ad43cc43801fc49cf" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 15 05:54:50 crc kubenswrapper[4747]: E1215 05:54:50.221760 4747 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="84d03e88-8521-4da8-9d5b-28185ed47abc" containerName="nova-scheduler-scheduler" Dec 15 05:54:50 crc kubenswrapper[4747]: I1215 05:54:50.225874 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 15 05:54:50 crc kubenswrapper[4747]: I1215 05:54:50.641516 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1afd3938-1da2-4c72-811d-fc9ec8f21171" path="/var/lib/kubelet/pods/1afd3938-1da2-4c72-811d-fc9ec8f21171/volumes" Dec 15 05:54:50 crc kubenswrapper[4747]: I1215 05:54:50.996285 4747 generic.go:334] "Generic (PLEG): container finished" podID="7cb287d9-6fdc-4ecd-80aa-fff39ebd8679" containerID="debb15f0daf261dd60168c746ccc67c94ea7902c78f54185ae090f32261cfcfe" exitCode=143 Dec 15 05:54:50 crc kubenswrapper[4747]: I1215 05:54:50.996392 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679","Type":"ContainerDied","Data":"debb15f0daf261dd60168c746ccc67c94ea7902c78f54185ae090f32261cfcfe"} Dec 15 05:54:50 crc kubenswrapper[4747]: I1215 05:54:50.998433 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0942975b-f269-434f-8853-927e58959a1f","Type":"ContainerStarted","Data":"b0b41407554d459aaff773e093368964bd37cb17a1be5cc34db0a9d94e6d0046"} Dec 15 05:54:51 crc kubenswrapper[4747]: I1215 
05:54:51.000707 4747 generic.go:334] "Generic (PLEG): container finished" podID="e335bdf2-e70d-4140-835b-d8071700c0f6" containerID="be188d4f2d55417a5e25ac92abbb09ed50adc4bff835f4e9b58e7a4bbe8349fd" exitCode=143 Dec 15 05:54:51 crc kubenswrapper[4747]: I1215 05:54:51.000779 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e335bdf2-e70d-4140-835b-d8071700c0f6","Type":"ContainerDied","Data":"be188d4f2d55417a5e25ac92abbb09ed50adc4bff835f4e9b58e7a4bbe8349fd"} Dec 15 05:54:52 crc kubenswrapper[4747]: I1215 05:54:52.010842 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0942975b-f269-434f-8853-927e58959a1f","Type":"ContainerStarted","Data":"c448c98e5895b0f8e6c9ee467b1cc9db2f852a18b074edbc14b08f8836c89185"} Dec 15 05:54:52 crc kubenswrapper[4747]: I1215 05:54:52.012273 4747 generic.go:334] "Generic (PLEG): container finished" podID="84d03e88-8521-4da8-9d5b-28185ed47abc" containerID="dc26235c0c1c670c3bd6d437305a8521cc2ad981febbb96ad43cc43801fc49cf" exitCode=0 Dec 15 05:54:52 crc kubenswrapper[4747]: I1215 05:54:52.012320 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"84d03e88-8521-4da8-9d5b-28185ed47abc","Type":"ContainerDied","Data":"dc26235c0c1c670c3bd6d437305a8521cc2ad981febbb96ad43cc43801fc49cf"} Dec 15 05:54:52 crc kubenswrapper[4747]: I1215 05:54:52.141917 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 15 05:54:52 crc kubenswrapper[4747]: I1215 05:54:52.221776 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdjl6\" (UniqueName: \"kubernetes.io/projected/84d03e88-8521-4da8-9d5b-28185ed47abc-kube-api-access-kdjl6\") pod \"84d03e88-8521-4da8-9d5b-28185ed47abc\" (UID: \"84d03e88-8521-4da8-9d5b-28185ed47abc\") " Dec 15 05:54:52 crc kubenswrapper[4747]: I1215 05:54:52.221973 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d03e88-8521-4da8-9d5b-28185ed47abc-config-data\") pod \"84d03e88-8521-4da8-9d5b-28185ed47abc\" (UID: \"84d03e88-8521-4da8-9d5b-28185ed47abc\") " Dec 15 05:54:52 crc kubenswrapper[4747]: I1215 05:54:52.222143 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d03e88-8521-4da8-9d5b-28185ed47abc-combined-ca-bundle\") pod \"84d03e88-8521-4da8-9d5b-28185ed47abc\" (UID: \"84d03e88-8521-4da8-9d5b-28185ed47abc\") " Dec 15 05:54:52 crc kubenswrapper[4747]: I1215 05:54:52.227395 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84d03e88-8521-4da8-9d5b-28185ed47abc-kube-api-access-kdjl6" (OuterVolumeSpecName: "kube-api-access-kdjl6") pod "84d03e88-8521-4da8-9d5b-28185ed47abc" (UID: "84d03e88-8521-4da8-9d5b-28185ed47abc"). InnerVolumeSpecName "kube-api-access-kdjl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:54:52 crc kubenswrapper[4747]: E1215 05:54:52.243250 4747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84d03e88-8521-4da8-9d5b-28185ed47abc-config-data podName:84d03e88-8521-4da8-9d5b-28185ed47abc nodeName:}" failed. No retries permitted until 2025-12-15 05:54:52.743211881 +0000 UTC m=+1056.439723798 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/84d03e88-8521-4da8-9d5b-28185ed47abc-config-data") pod "84d03e88-8521-4da8-9d5b-28185ed47abc" (UID: "84d03e88-8521-4da8-9d5b-28185ed47abc") : error deleting /var/lib/kubelet/pods/84d03e88-8521-4da8-9d5b-28185ed47abc/volume-subpaths: remove /var/lib/kubelet/pods/84d03e88-8521-4da8-9d5b-28185ed47abc/volume-subpaths: no such file or directory Dec 15 05:54:52 crc kubenswrapper[4747]: I1215 05:54:52.245302 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84d03e88-8521-4da8-9d5b-28185ed47abc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84d03e88-8521-4da8-9d5b-28185ed47abc" (UID: "84d03e88-8521-4da8-9d5b-28185ed47abc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:52 crc kubenswrapper[4747]: I1215 05:54:52.323540 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d03e88-8521-4da8-9d5b-28185ed47abc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:52 crc kubenswrapper[4747]: I1215 05:54:52.323574 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdjl6\" (UniqueName: \"kubernetes.io/projected/84d03e88-8521-4da8-9d5b-28185ed47abc-kube-api-access-kdjl6\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:52 crc kubenswrapper[4747]: I1215 05:54:52.833747 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d03e88-8521-4da8-9d5b-28185ed47abc-config-data\") pod \"84d03e88-8521-4da8-9d5b-28185ed47abc\" (UID: \"84d03e88-8521-4da8-9d5b-28185ed47abc\") " Dec 15 05:54:52 crc kubenswrapper[4747]: I1215 05:54:52.853685 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84d03e88-8521-4da8-9d5b-28185ed47abc-config-data" 
(OuterVolumeSpecName: "config-data") pod "84d03e88-8521-4da8-9d5b-28185ed47abc" (UID: "84d03e88-8521-4da8-9d5b-28185ed47abc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:52 crc kubenswrapper[4747]: I1215 05:54:52.936731 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d03e88-8521-4da8-9d5b-28185ed47abc-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.026138 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"84d03e88-8521-4da8-9d5b-28185ed47abc","Type":"ContainerDied","Data":"87115b9605f48b73ccb47da85f0b108ea75dab5bfe03d99002f8cb116089cad8"} Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.026243 4747 scope.go:117] "RemoveContainer" containerID="dc26235c0c1c670c3bd6d437305a8521cc2ad981febbb96ad43cc43801fc49cf" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.026173 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.029235 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0942975b-f269-434f-8853-927e58959a1f","Type":"ContainerStarted","Data":"f69b62137d4a382a55917c62c95a558a7b68ab082637a743719953bef43ef398"} Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.060778 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.075615 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.082892 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 15 05:54:53 crc kubenswrapper[4747]: E1215 05:54:53.083374 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b819ab60-7712-47d0-853b-4ae39eb770b1" containerName="nova-manage" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.083397 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="b819ab60-7712-47d0-853b-4ae39eb770b1" containerName="nova-manage" Dec 15 05:54:53 crc kubenswrapper[4747]: E1215 05:54:53.083416 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d03e88-8521-4da8-9d5b-28185ed47abc" containerName="nova-scheduler-scheduler" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.083423 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d03e88-8521-4da8-9d5b-28185ed47abc" containerName="nova-scheduler-scheduler" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.083607 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="84d03e88-8521-4da8-9d5b-28185ed47abc" containerName="nova-scheduler-scheduler" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.083638 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="b819ab60-7712-47d0-853b-4ae39eb770b1" 
containerName="nova-manage" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.084306 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.086543 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.090588 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.247470 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwdgc\" (UniqueName: \"kubernetes.io/projected/13aa9deb-71b7-4adf-858c-89c461427547-kube-api-access-gwdgc\") pod \"nova-scheduler-0\" (UID: \"13aa9deb-71b7-4adf-858c-89c461427547\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.247674 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13aa9deb-71b7-4adf-858c-89c461427547-config-data\") pod \"nova-scheduler-0\" (UID: \"13aa9deb-71b7-4adf-858c-89c461427547\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.247759 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13aa9deb-71b7-4adf-858c-89c461427547-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"13aa9deb-71b7-4adf-858c-89c461427547\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.350374 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwdgc\" (UniqueName: \"kubernetes.io/projected/13aa9deb-71b7-4adf-858c-89c461427547-kube-api-access-gwdgc\") pod \"nova-scheduler-0\" (UID: 
\"13aa9deb-71b7-4adf-858c-89c461427547\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.350502 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13aa9deb-71b7-4adf-858c-89c461427547-config-data\") pod \"nova-scheduler-0\" (UID: \"13aa9deb-71b7-4adf-858c-89c461427547\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.350578 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13aa9deb-71b7-4adf-858c-89c461427547-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"13aa9deb-71b7-4adf-858c-89c461427547\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.356973 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13aa9deb-71b7-4adf-858c-89c461427547-config-data\") pod \"nova-scheduler-0\" (UID: \"13aa9deb-71b7-4adf-858c-89c461427547\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.365311 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwdgc\" (UniqueName: \"kubernetes.io/projected/13aa9deb-71b7-4adf-858c-89c461427547-kube-api-access-gwdgc\") pod \"nova-scheduler-0\" (UID: \"13aa9deb-71b7-4adf-858c-89c461427547\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.372430 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13aa9deb-71b7-4adf-858c-89c461427547-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"13aa9deb-71b7-4adf-858c-89c461427547\") " pod="openstack/nova-scheduler-0" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.410370 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.739408 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.752118 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.865099 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-config-data\") pod \"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679\" (UID: \"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679\") " Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.865145 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e335bdf2-e70d-4140-835b-d8071700c0f6-nova-metadata-tls-certs\") pod \"e335bdf2-e70d-4140-835b-d8071700c0f6\" (UID: \"e335bdf2-e70d-4140-835b-d8071700c0f6\") " Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.865186 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e335bdf2-e70d-4140-835b-d8071700c0f6-logs\") pod \"e335bdf2-e70d-4140-835b-d8071700c0f6\" (UID: \"e335bdf2-e70d-4140-835b-d8071700c0f6\") " Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.865245 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e335bdf2-e70d-4140-835b-d8071700c0f6-config-data\") pod \"e335bdf2-e70d-4140-835b-d8071700c0f6\" (UID: \"e335bdf2-e70d-4140-835b-d8071700c0f6\") " Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.865310 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqpfc\" (UniqueName: 
\"kubernetes.io/projected/e335bdf2-e70d-4140-835b-d8071700c0f6-kube-api-access-bqpfc\") pod \"e335bdf2-e70d-4140-835b-d8071700c0f6\" (UID: \"e335bdf2-e70d-4140-835b-d8071700c0f6\") " Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.865335 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-logs\") pod \"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679\" (UID: \"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679\") " Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.865355 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e335bdf2-e70d-4140-835b-d8071700c0f6-combined-ca-bundle\") pod \"e335bdf2-e70d-4140-835b-d8071700c0f6\" (UID: \"e335bdf2-e70d-4140-835b-d8071700c0f6\") " Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.865390 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-combined-ca-bundle\") pod \"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679\" (UID: \"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679\") " Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.865418 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npxxg\" (UniqueName: \"kubernetes.io/projected/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-kube-api-access-npxxg\") pod \"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679\" (UID: \"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679\") " Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.865635 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e335bdf2-e70d-4140-835b-d8071700c0f6-logs" (OuterVolumeSpecName: "logs") pod "e335bdf2-e70d-4140-835b-d8071700c0f6" (UID: "e335bdf2-e70d-4140-835b-d8071700c0f6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.865838 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-logs" (OuterVolumeSpecName: "logs") pod "7cb287d9-6fdc-4ecd-80aa-fff39ebd8679" (UID: "7cb287d9-6fdc-4ecd-80aa-fff39ebd8679"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.865954 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e335bdf2-e70d-4140-835b-d8071700c0f6-logs\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.870147 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-kube-api-access-npxxg" (OuterVolumeSpecName: "kube-api-access-npxxg") pod "7cb287d9-6fdc-4ecd-80aa-fff39ebd8679" (UID: "7cb287d9-6fdc-4ecd-80aa-fff39ebd8679"). InnerVolumeSpecName "kube-api-access-npxxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.870298 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e335bdf2-e70d-4140-835b-d8071700c0f6-kube-api-access-bqpfc" (OuterVolumeSpecName: "kube-api-access-bqpfc") pod "e335bdf2-e70d-4140-835b-d8071700c0f6" (UID: "e335bdf2-e70d-4140-835b-d8071700c0f6"). InnerVolumeSpecName "kube-api-access-bqpfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.892672 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-config-data" (OuterVolumeSpecName: "config-data") pod "7cb287d9-6fdc-4ecd-80aa-fff39ebd8679" (UID: "7cb287d9-6fdc-4ecd-80aa-fff39ebd8679"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.894864 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e335bdf2-e70d-4140-835b-d8071700c0f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e335bdf2-e70d-4140-835b-d8071700c0f6" (UID: "e335bdf2-e70d-4140-835b-d8071700c0f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.897210 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e335bdf2-e70d-4140-835b-d8071700c0f6-config-data" (OuterVolumeSpecName: "config-data") pod "e335bdf2-e70d-4140-835b-d8071700c0f6" (UID: "e335bdf2-e70d-4140-835b-d8071700c0f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.898410 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cb287d9-6fdc-4ecd-80aa-fff39ebd8679" (UID: "7cb287d9-6fdc-4ecd-80aa-fff39ebd8679"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.903807 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.917211 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e335bdf2-e70d-4140-835b-d8071700c0f6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e335bdf2-e70d-4140-835b-d8071700c0f6" (UID: "e335bdf2-e70d-4140-835b-d8071700c0f6"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.968447 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.968481 4747 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e335bdf2-e70d-4140-835b-d8071700c0f6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.968494 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e335bdf2-e70d-4140-835b-d8071700c0f6-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.968508 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqpfc\" (UniqueName: \"kubernetes.io/projected/e335bdf2-e70d-4140-835b-d8071700c0f6-kube-api-access-bqpfc\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.968518 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-logs\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.968529 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e335bdf2-e70d-4140-835b-d8071700c0f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.968539 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:53 crc kubenswrapper[4747]: I1215 05:54:53.968549 4747 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npxxg\" (UniqueName: \"kubernetes.io/projected/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679-kube-api-access-npxxg\") on node \"crc\" DevicePath \"\"" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.043305 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"13aa9deb-71b7-4adf-858c-89c461427547","Type":"ContainerStarted","Data":"1ca505a57ad8166d7c5628846ca1f132c833db20860c3aa02f7b108fa7eb2d4c"} Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.049281 4747 generic.go:334] "Generic (PLEG): container finished" podID="7cb287d9-6fdc-4ecd-80aa-fff39ebd8679" containerID="ba6633aba93f8cebbb1780644b8692676db07b155bbb4603183556d3653c8737" exitCode=0 Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.049357 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.049518 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679","Type":"ContainerDied","Data":"ba6633aba93f8cebbb1780644b8692676db07b155bbb4603183556d3653c8737"} Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.049624 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7cb287d9-6fdc-4ecd-80aa-fff39ebd8679","Type":"ContainerDied","Data":"3838102619e02d9a8b656d1654c863d9ebad0a26554d767c58f7a17ec0424eac"} Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.049696 4747 scope.go:117] "RemoveContainer" containerID="ba6633aba93f8cebbb1780644b8692676db07b155bbb4603183556d3653c8737" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.051782 4747 generic.go:334] "Generic (PLEG): container finished" podID="e335bdf2-e70d-4140-835b-d8071700c0f6" containerID="213ef4370571a38ac03e4905dce468332d87713c7789e9d62290d84bc8b62135" exitCode=0 Dec 15 05:54:54 crc 
kubenswrapper[4747]: I1215 05:54:54.051817 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e335bdf2-e70d-4140-835b-d8071700c0f6","Type":"ContainerDied","Data":"213ef4370571a38ac03e4905dce468332d87713c7789e9d62290d84bc8b62135"} Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.051845 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e335bdf2-e70d-4140-835b-d8071700c0f6","Type":"ContainerDied","Data":"05ff72bb97849d974e90976380dac3e60e5c8bae881084e3847e644b2bada91a"} Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.051906 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.067592 4747 scope.go:117] "RemoveContainer" containerID="debb15f0daf261dd60168c746ccc67c94ea7902c78f54185ae090f32261cfcfe" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.086548 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.127376 4747 scope.go:117] "RemoveContainer" containerID="ba6633aba93f8cebbb1780644b8692676db07b155bbb4603183556d3653c8737" Dec 15 05:54:54 crc kubenswrapper[4747]: E1215 05:54:54.128258 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba6633aba93f8cebbb1780644b8692676db07b155bbb4603183556d3653c8737\": container with ID starting with ba6633aba93f8cebbb1780644b8692676db07b155bbb4603183556d3653c8737 not found: ID does not exist" containerID="ba6633aba93f8cebbb1780644b8692676db07b155bbb4603183556d3653c8737" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.128325 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba6633aba93f8cebbb1780644b8692676db07b155bbb4603183556d3653c8737"} err="failed to get container status 
\"ba6633aba93f8cebbb1780644b8692676db07b155bbb4603183556d3653c8737\": rpc error: code = NotFound desc = could not find container \"ba6633aba93f8cebbb1780644b8692676db07b155bbb4603183556d3653c8737\": container with ID starting with ba6633aba93f8cebbb1780644b8692676db07b155bbb4603183556d3653c8737 not found: ID does not exist" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.128356 4747 scope.go:117] "RemoveContainer" containerID="debb15f0daf261dd60168c746ccc67c94ea7902c78f54185ae090f32261cfcfe" Dec 15 05:54:54 crc kubenswrapper[4747]: E1215 05:54:54.130184 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"debb15f0daf261dd60168c746ccc67c94ea7902c78f54185ae090f32261cfcfe\": container with ID starting with debb15f0daf261dd60168c746ccc67c94ea7902c78f54185ae090f32261cfcfe not found: ID does not exist" containerID="debb15f0daf261dd60168c746ccc67c94ea7902c78f54185ae090f32261cfcfe" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.130211 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"debb15f0daf261dd60168c746ccc67c94ea7902c78f54185ae090f32261cfcfe"} err="failed to get container status \"debb15f0daf261dd60168c746ccc67c94ea7902c78f54185ae090f32261cfcfe\": rpc error: code = NotFound desc = could not find container \"debb15f0daf261dd60168c746ccc67c94ea7902c78f54185ae090f32261cfcfe\": container with ID starting with debb15f0daf261dd60168c746ccc67c94ea7902c78f54185ae090f32261cfcfe not found: ID does not exist" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.130236 4747 scope.go:117] "RemoveContainer" containerID="213ef4370571a38ac03e4905dce468332d87713c7789e9d62290d84bc8b62135" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.150271 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.168190 4747 scope.go:117] "RemoveContainer" 
containerID="be188d4f2d55417a5e25ac92abbb09ed50adc4bff835f4e9b58e7a4bbe8349fd" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.173561 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.182434 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.188370 4747 scope.go:117] "RemoveContainer" containerID="213ef4370571a38ac03e4905dce468332d87713c7789e9d62290d84bc8b62135" Dec 15 05:54:54 crc kubenswrapper[4747]: E1215 05:54:54.189078 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"213ef4370571a38ac03e4905dce468332d87713c7789e9d62290d84bc8b62135\": container with ID starting with 213ef4370571a38ac03e4905dce468332d87713c7789e9d62290d84bc8b62135 not found: ID does not exist" containerID="213ef4370571a38ac03e4905dce468332d87713c7789e9d62290d84bc8b62135" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.189122 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"213ef4370571a38ac03e4905dce468332d87713c7789e9d62290d84bc8b62135"} err="failed to get container status \"213ef4370571a38ac03e4905dce468332d87713c7789e9d62290d84bc8b62135\": rpc error: code = NotFound desc = could not find container \"213ef4370571a38ac03e4905dce468332d87713c7789e9d62290d84bc8b62135\": container with ID starting with 213ef4370571a38ac03e4905dce468332d87713c7789e9d62290d84bc8b62135 not found: ID does not exist" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.189152 4747 scope.go:117] "RemoveContainer" containerID="be188d4f2d55417a5e25ac92abbb09ed50adc4bff835f4e9b58e7a4bbe8349fd" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.191081 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 15 05:54:54 crc kubenswrapper[4747]: E1215 
05:54:54.191855 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e335bdf2-e70d-4140-835b-d8071700c0f6" containerName="nova-metadata-metadata" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.191955 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e335bdf2-e70d-4140-835b-d8071700c0f6" containerName="nova-metadata-metadata" Dec 15 05:54:54 crc kubenswrapper[4747]: E1215 05:54:54.192036 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be188d4f2d55417a5e25ac92abbb09ed50adc4bff835f4e9b58e7a4bbe8349fd\": container with ID starting with be188d4f2d55417a5e25ac92abbb09ed50adc4bff835f4e9b58e7a4bbe8349fd not found: ID does not exist" containerID="be188d4f2d55417a5e25ac92abbb09ed50adc4bff835f4e9b58e7a4bbe8349fd" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.192068 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be188d4f2d55417a5e25ac92abbb09ed50adc4bff835f4e9b58e7a4bbe8349fd"} err="failed to get container status \"be188d4f2d55417a5e25ac92abbb09ed50adc4bff835f4e9b58e7a4bbe8349fd\": rpc error: code = NotFound desc = could not find container \"be188d4f2d55417a5e25ac92abbb09ed50adc4bff835f4e9b58e7a4bbe8349fd\": container with ID starting with be188d4f2d55417a5e25ac92abbb09ed50adc4bff835f4e9b58e7a4bbe8349fd not found: ID does not exist" Dec 15 05:54:54 crc kubenswrapper[4747]: E1215 05:54:54.192050 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e335bdf2-e70d-4140-835b-d8071700c0f6" containerName="nova-metadata-log" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.192094 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e335bdf2-e70d-4140-835b-d8071700c0f6" containerName="nova-metadata-log" Dec 15 05:54:54 crc kubenswrapper[4747]: E1215 05:54:54.192181 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb287d9-6fdc-4ecd-80aa-fff39ebd8679" 
containerName="nova-api-log" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.192190 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb287d9-6fdc-4ecd-80aa-fff39ebd8679" containerName="nova-api-log" Dec 15 05:54:54 crc kubenswrapper[4747]: E1215 05:54:54.192203 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb287d9-6fdc-4ecd-80aa-fff39ebd8679" containerName="nova-api-api" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.192209 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb287d9-6fdc-4ecd-80aa-fff39ebd8679" containerName="nova-api-api" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.192551 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cb287d9-6fdc-4ecd-80aa-fff39ebd8679" containerName="nova-api-api" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.192573 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="e335bdf2-e70d-4140-835b-d8071700c0f6" containerName="nova-metadata-metadata" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.192595 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cb287d9-6fdc-4ecd-80aa-fff39ebd8679" containerName="nova-api-log" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.192609 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="e335bdf2-e70d-4140-835b-d8071700c0f6" containerName="nova-metadata-log" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.193829 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.195765 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.198677 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.200116 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.201546 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.201748 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.210582 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.215653 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.376487 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45a17a8-29f1-40e2-96ae-f2db0b32407e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e45a17a8-29f1-40e2-96ae-f2db0b32407e\") " pod="openstack/nova-metadata-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.376555 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6ks8\" (UniqueName: \"kubernetes.io/projected/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-kube-api-access-c6ks8\") pod \"nova-api-0\" (UID: \"070ff6d1-3ab4-4212-80c9-82fd3107dcd6\") " pod="openstack/nova-api-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 
05:54:54.376592 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-logs\") pod \"nova-api-0\" (UID: \"070ff6d1-3ab4-4212-80c9-82fd3107dcd6\") " pod="openstack/nova-api-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.376644 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd9p4\" (UniqueName: \"kubernetes.io/projected/e45a17a8-29f1-40e2-96ae-f2db0b32407e-kube-api-access-zd9p4\") pod \"nova-metadata-0\" (UID: \"e45a17a8-29f1-40e2-96ae-f2db0b32407e\") " pod="openstack/nova-metadata-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.376708 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e45a17a8-29f1-40e2-96ae-f2db0b32407e-logs\") pod \"nova-metadata-0\" (UID: \"e45a17a8-29f1-40e2-96ae-f2db0b32407e\") " pod="openstack/nova-metadata-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.376946 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e45a17a8-29f1-40e2-96ae-f2db0b32407e-config-data\") pod \"nova-metadata-0\" (UID: \"e45a17a8-29f1-40e2-96ae-f2db0b32407e\") " pod="openstack/nova-metadata-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.377149 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45a17a8-29f1-40e2-96ae-f2db0b32407e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e45a17a8-29f1-40e2-96ae-f2db0b32407e\") " pod="openstack/nova-metadata-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.377205 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"070ff6d1-3ab4-4212-80c9-82fd3107dcd6\") " pod="openstack/nova-api-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.377271 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-config-data\") pod \"nova-api-0\" (UID: \"070ff6d1-3ab4-4212-80c9-82fd3107dcd6\") " pod="openstack/nova-api-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.478605 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45a17a8-29f1-40e2-96ae-f2db0b32407e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e45a17a8-29f1-40e2-96ae-f2db0b32407e\") " pod="openstack/nova-metadata-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.478648 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"070ff6d1-3ab4-4212-80c9-82fd3107dcd6\") " pod="openstack/nova-api-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.478681 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-config-data\") pod \"nova-api-0\" (UID: \"070ff6d1-3ab4-4212-80c9-82fd3107dcd6\") " pod="openstack/nova-api-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.478717 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45a17a8-29f1-40e2-96ae-f2db0b32407e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e45a17a8-29f1-40e2-96ae-f2db0b32407e\") " 
pod="openstack/nova-metadata-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.478756 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6ks8\" (UniqueName: \"kubernetes.io/projected/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-kube-api-access-c6ks8\") pod \"nova-api-0\" (UID: \"070ff6d1-3ab4-4212-80c9-82fd3107dcd6\") " pod="openstack/nova-api-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.478777 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-logs\") pod \"nova-api-0\" (UID: \"070ff6d1-3ab4-4212-80c9-82fd3107dcd6\") " pod="openstack/nova-api-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.478800 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd9p4\" (UniqueName: \"kubernetes.io/projected/e45a17a8-29f1-40e2-96ae-f2db0b32407e-kube-api-access-zd9p4\") pod \"nova-metadata-0\" (UID: \"e45a17a8-29f1-40e2-96ae-f2db0b32407e\") " pod="openstack/nova-metadata-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.478840 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e45a17a8-29f1-40e2-96ae-f2db0b32407e-logs\") pod \"nova-metadata-0\" (UID: \"e45a17a8-29f1-40e2-96ae-f2db0b32407e\") " pod="openstack/nova-metadata-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.478893 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e45a17a8-29f1-40e2-96ae-f2db0b32407e-config-data\") pod \"nova-metadata-0\" (UID: \"e45a17a8-29f1-40e2-96ae-f2db0b32407e\") " pod="openstack/nova-metadata-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.479597 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-logs\") pod \"nova-api-0\" (UID: \"070ff6d1-3ab4-4212-80c9-82fd3107dcd6\") " pod="openstack/nova-api-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.479684 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e45a17a8-29f1-40e2-96ae-f2db0b32407e-logs\") pod \"nova-metadata-0\" (UID: \"e45a17a8-29f1-40e2-96ae-f2db0b32407e\") " pod="openstack/nova-metadata-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.482536 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e45a17a8-29f1-40e2-96ae-f2db0b32407e-config-data\") pod \"nova-metadata-0\" (UID: \"e45a17a8-29f1-40e2-96ae-f2db0b32407e\") " pod="openstack/nova-metadata-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.482668 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45a17a8-29f1-40e2-96ae-f2db0b32407e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e45a17a8-29f1-40e2-96ae-f2db0b32407e\") " pod="openstack/nova-metadata-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.483162 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-config-data\") pod \"nova-api-0\" (UID: \"070ff6d1-3ab4-4212-80c9-82fd3107dcd6\") " pod="openstack/nova-api-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.483607 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"070ff6d1-3ab4-4212-80c9-82fd3107dcd6\") " pod="openstack/nova-api-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.493412 4747 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45a17a8-29f1-40e2-96ae-f2db0b32407e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e45a17a8-29f1-40e2-96ae-f2db0b32407e\") " pod="openstack/nova-metadata-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.495467 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd9p4\" (UniqueName: \"kubernetes.io/projected/e45a17a8-29f1-40e2-96ae-f2db0b32407e-kube-api-access-zd9p4\") pod \"nova-metadata-0\" (UID: \"e45a17a8-29f1-40e2-96ae-f2db0b32407e\") " pod="openstack/nova-metadata-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.496690 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6ks8\" (UniqueName: \"kubernetes.io/projected/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-kube-api-access-c6ks8\") pod \"nova-api-0\" (UID: \"070ff6d1-3ab4-4212-80c9-82fd3107dcd6\") " pod="openstack/nova-api-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.509610 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.518173 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.644703 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cb287d9-6fdc-4ecd-80aa-fff39ebd8679" path="/var/lib/kubelet/pods/7cb287d9-6fdc-4ecd-80aa-fff39ebd8679/volumes" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.645245 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84d03e88-8521-4da8-9d5b-28185ed47abc" path="/var/lib/kubelet/pods/84d03e88-8521-4da8-9d5b-28185ed47abc/volumes" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.645782 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e335bdf2-e70d-4140-835b-d8071700c0f6" path="/var/lib/kubelet/pods/e335bdf2-e70d-4140-835b-d8071700c0f6/volumes" Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.930090 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 15 05:54:54 crc kubenswrapper[4747]: W1215 05:54:54.934110 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod070ff6d1_3ab4_4212_80c9_82fd3107dcd6.slice/crio-daff495911359a36bc287341ecbd8196b85f4603691d503c29828b704286be53 WatchSource:0}: Error finding container daff495911359a36bc287341ecbd8196b85f4603691d503c29828b704286be53: Status 404 returned error can't find the container with id daff495911359a36bc287341ecbd8196b85f4603691d503c29828b704286be53 Dec 15 05:54:54 crc kubenswrapper[4747]: I1215 05:54:54.970853 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 15 05:54:54 crc kubenswrapper[4747]: W1215 05:54:54.973252 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode45a17a8_29f1_40e2_96ae_f2db0b32407e.slice/crio-d5a92d6fb9c6d5db30e5e6b6c9cb0b370e5fd17e99ae421261a42cc73918567f WatchSource:0}: Error finding container 
d5a92d6fb9c6d5db30e5e6b6c9cb0b370e5fd17e99ae421261a42cc73918567f: Status 404 returned error can't find the container with id d5a92d6fb9c6d5db30e5e6b6c9cb0b370e5fd17e99ae421261a42cc73918567f Dec 15 05:54:55 crc kubenswrapper[4747]: I1215 05:54:55.069353 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0942975b-f269-434f-8853-927e58959a1f","Type":"ContainerStarted","Data":"e20f0964c59f6bde3634ff7c142313a2d02c0af488ad408d45e15afdf8ad8fd7"} Dec 15 05:54:55 crc kubenswrapper[4747]: I1215 05:54:55.070386 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 15 05:54:55 crc kubenswrapper[4747]: I1215 05:54:55.071425 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"070ff6d1-3ab4-4212-80c9-82fd3107dcd6","Type":"ContainerStarted","Data":"daff495911359a36bc287341ecbd8196b85f4603691d503c29828b704286be53"} Dec 15 05:54:55 crc kubenswrapper[4747]: I1215 05:54:55.074181 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e45a17a8-29f1-40e2-96ae-f2db0b32407e","Type":"ContainerStarted","Data":"d5a92d6fb9c6d5db30e5e6b6c9cb0b370e5fd17e99ae421261a42cc73918567f"} Dec 15 05:54:55 crc kubenswrapper[4747]: I1215 05:54:55.077515 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"13aa9deb-71b7-4adf-858c-89c461427547","Type":"ContainerStarted","Data":"f823c1974179fb9dd374a9bc2f8ee79ae207fec3063771fea59e396462eeab3c"} Dec 15 05:54:55 crc kubenswrapper[4747]: I1215 05:54:55.116314 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.744725394 podStartE2EDuration="6.116297261s" podCreationTimestamp="2025-12-15 05:54:49 +0000 UTC" firstStartedPulling="2025-12-15 05:54:49.842774256 +0000 UTC m=+1053.539286173" lastFinishedPulling="2025-12-15 05:54:54.214346123 +0000 UTC m=+1057.910858040" 
observedRunningTime="2025-12-15 05:54:55.094212945 +0000 UTC m=+1058.790724862" watchObservedRunningTime="2025-12-15 05:54:55.116297261 +0000 UTC m=+1058.812809178" Dec 15 05:54:55 crc kubenswrapper[4747]: I1215 05:54:55.123433 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.123420105 podStartE2EDuration="2.123420105s" podCreationTimestamp="2025-12-15 05:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:54:55.114919992 +0000 UTC m=+1058.811431909" watchObservedRunningTime="2025-12-15 05:54:55.123420105 +0000 UTC m=+1058.819932022" Dec 15 05:54:56 crc kubenswrapper[4747]: I1215 05:54:56.088960 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e45a17a8-29f1-40e2-96ae-f2db0b32407e","Type":"ContainerStarted","Data":"c53d6538477d565efe74db056dfefe5ef81068bf1056d7313cadc9f73c2b4e8a"} Dec 15 05:54:56 crc kubenswrapper[4747]: I1215 05:54:56.089349 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e45a17a8-29f1-40e2-96ae-f2db0b32407e","Type":"ContainerStarted","Data":"7016e1d24ce600864941b72809635dee65035e1566e6564d682143756264b73a"} Dec 15 05:54:56 crc kubenswrapper[4747]: I1215 05:54:56.091906 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"070ff6d1-3ab4-4212-80c9-82fd3107dcd6","Type":"ContainerStarted","Data":"82f174e8db062bed9818637f1aa9f848cf6a0dd19057273377474ca6534f2b87"} Dec 15 05:54:56 crc kubenswrapper[4747]: I1215 05:54:56.091994 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"070ff6d1-3ab4-4212-80c9-82fd3107dcd6","Type":"ContainerStarted","Data":"e1b808e4d3c42b40c8dd305f102effed8dd6e99b5920edfb2f339341fd040907"} Dec 15 05:54:56 crc kubenswrapper[4747]: I1215 05:54:56.115988 4747 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.115971624 podStartE2EDuration="2.115971624s" podCreationTimestamp="2025-12-15 05:54:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:54:56.104133073 +0000 UTC m=+1059.800644990" watchObservedRunningTime="2025-12-15 05:54:56.115971624 +0000 UTC m=+1059.812483542" Dec 15 05:54:56 crc kubenswrapper[4747]: I1215 05:54:56.135000 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.134983363 podStartE2EDuration="2.134983363s" podCreationTimestamp="2025-12-15 05:54:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:54:56.12381873 +0000 UTC m=+1059.820330646" watchObservedRunningTime="2025-12-15 05:54:56.134983363 +0000 UTC m=+1059.831495280" Dec 15 05:54:58 crc kubenswrapper[4747]: I1215 05:54:58.411534 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 15 05:54:58 crc kubenswrapper[4747]: I1215 05:54:58.865001 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 05:54:58 crc kubenswrapper[4747]: I1215 05:54:58.865388 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 05:54:58 crc kubenswrapper[4747]: I1215 
05:54:58.865439 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 05:54:58 crc kubenswrapper[4747]: I1215 05:54:58.865918 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"90f12c1fab813a5975dd1bb7980ef75e3315dc4c893a83d1630f8dfbea3891d6"} pod="openshift-machine-config-operator/machine-config-daemon-nldtn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 15 05:54:58 crc kubenswrapper[4747]: I1215 05:54:58.866009 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" containerID="cri-o://90f12c1fab813a5975dd1bb7980ef75e3315dc4c893a83d1630f8dfbea3891d6" gracePeriod=600 Dec 15 05:54:59 crc kubenswrapper[4747]: I1215 05:54:59.123484 4747 generic.go:334] "Generic (PLEG): container finished" podID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerID="90f12c1fab813a5975dd1bb7980ef75e3315dc4c893a83d1630f8dfbea3891d6" exitCode=0 Dec 15 05:54:59 crc kubenswrapper[4747]: I1215 05:54:59.123541 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerDied","Data":"90f12c1fab813a5975dd1bb7980ef75e3315dc4c893a83d1630f8dfbea3891d6"} Dec 15 05:54:59 crc kubenswrapper[4747]: I1215 05:54:59.123582 4747 scope.go:117] "RemoveContainer" containerID="1f6be68cbfc9d5eee88cda586fa59c68181c75ecba41c64c7ee60c7ad6d664b8" Dec 15 05:54:59 crc kubenswrapper[4747]: I1215 05:54:59.518746 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 15 05:54:59 crc kubenswrapper[4747]: I1215 05:54:59.518969 4747 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 15 05:55:00 crc kubenswrapper[4747]: I1215 05:55:00.136632 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerStarted","Data":"24e2ae64f4e610798e09d68f965f566dada71476cc8359af941aa647f9585c49"} Dec 15 05:55:03 crc kubenswrapper[4747]: I1215 05:55:03.411791 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 15 05:55:03 crc kubenswrapper[4747]: I1215 05:55:03.436877 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 15 05:55:04 crc kubenswrapper[4747]: I1215 05:55:04.192344 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 15 05:55:04 crc kubenswrapper[4747]: I1215 05:55:04.510912 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 15 05:55:04 crc kubenswrapper[4747]: I1215 05:55:04.511306 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 15 05:55:04 crc kubenswrapper[4747]: I1215 05:55:04.519167 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 15 05:55:04 crc kubenswrapper[4747]: I1215 05:55:04.519242 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 15 05:55:05 crc kubenswrapper[4747]: I1215 05:55:05.594071 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="070ff6d1-3ab4-4212-80c9-82fd3107dcd6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 15 05:55:05 
crc kubenswrapper[4747]: I1215 05:55:05.605048 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e45a17a8-29f1-40e2-96ae-f2db0b32407e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 15 05:55:05 crc kubenswrapper[4747]: I1215 05:55:05.605076 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e45a17a8-29f1-40e2-96ae-f2db0b32407e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 15 05:55:05 crc kubenswrapper[4747]: I1215 05:55:05.605048 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="070ff6d1-3ab4-4212-80c9-82fd3107dcd6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 15 05:55:14 crc kubenswrapper[4747]: I1215 05:55:14.516052 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 15 05:55:14 crc kubenswrapper[4747]: I1215 05:55:14.517254 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 15 05:55:14 crc kubenswrapper[4747]: I1215 05:55:14.518153 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 15 05:55:14 crc kubenswrapper[4747]: I1215 05:55:14.525428 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 15 05:55:14 crc kubenswrapper[4747]: I1215 05:55:14.528783 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 15 05:55:14 crc kubenswrapper[4747]: I1215 05:55:14.529138 4747 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 15 05:55:14 crc kubenswrapper[4747]: I1215 05:55:14.532678 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.296552 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.300478 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.301952 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.464615 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-864557ccdf-8grfz"] Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.466594 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.520891 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864557ccdf-8grfz"] Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.602642 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-dns-svc\") pod \"dnsmasq-dns-864557ccdf-8grfz\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.602859 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwrjm\" (UniqueName: \"kubernetes.io/projected/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-kube-api-access-nwrjm\") pod \"dnsmasq-dns-864557ccdf-8grfz\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.603861 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-ovsdbserver-sb\") pod \"dnsmasq-dns-864557ccdf-8grfz\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.604005 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-dns-swift-storage-0\") pod \"dnsmasq-dns-864557ccdf-8grfz\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.604037 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-ovsdbserver-nb\") pod \"dnsmasq-dns-864557ccdf-8grfz\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.604055 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-config\") pod \"dnsmasq-dns-864557ccdf-8grfz\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.706408 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwrjm\" (UniqueName: \"kubernetes.io/projected/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-kube-api-access-nwrjm\") pod \"dnsmasq-dns-864557ccdf-8grfz\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.706714 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-ovsdbserver-sb\") pod \"dnsmasq-dns-864557ccdf-8grfz\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.706871 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-config\") pod \"dnsmasq-dns-864557ccdf-8grfz\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.706914 4747 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-dns-swift-storage-0\") pod \"dnsmasq-dns-864557ccdf-8grfz\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.706948 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-ovsdbserver-nb\") pod \"dnsmasq-dns-864557ccdf-8grfz\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.707031 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-dns-svc\") pod \"dnsmasq-dns-864557ccdf-8grfz\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.708012 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-ovsdbserver-sb\") pod \"dnsmasq-dns-864557ccdf-8grfz\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.708080 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-dns-svc\") pod \"dnsmasq-dns-864557ccdf-8grfz\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.708611 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-dns-swift-storage-0\") pod \"dnsmasq-dns-864557ccdf-8grfz\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.708723 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-ovsdbserver-nb\") pod \"dnsmasq-dns-864557ccdf-8grfz\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.709158 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-config\") pod \"dnsmasq-dns-864557ccdf-8grfz\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.733137 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwrjm\" (UniqueName: \"kubernetes.io/projected/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-kube-api-access-nwrjm\") pod \"dnsmasq-dns-864557ccdf-8grfz\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:15 crc kubenswrapper[4747]: I1215 05:55:15.819520 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:16 crc kubenswrapper[4747]: I1215 05:55:16.283167 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864557ccdf-8grfz"] Dec 15 05:55:16 crc kubenswrapper[4747]: I1215 05:55:16.308606 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864557ccdf-8grfz" event={"ID":"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0","Type":"ContainerStarted","Data":"190f3ab15a83ba1313fdae20194c4aee5c486b7ce777f9bd4825a9887df3a647"} Dec 15 05:55:17 crc kubenswrapper[4747]: I1215 05:55:17.320726 4747 generic.go:334] "Generic (PLEG): container finished" podID="fee4e87f-8126-4c7e-bbb1-898b81c4f9b0" containerID="fef3cbd2a130f0a1c7e40d974ad577095cbabf939d7a0ff4f8213349e9370411" exitCode=0 Dec 15 05:55:17 crc kubenswrapper[4747]: I1215 05:55:17.320847 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864557ccdf-8grfz" event={"ID":"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0","Type":"ContainerDied","Data":"fef3cbd2a130f0a1c7e40d974ad577095cbabf939d7a0ff4f8213349e9370411"} Dec 15 05:55:17 crc kubenswrapper[4747]: I1215 05:55:17.621868 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:55:17 crc kubenswrapper[4747]: I1215 05:55:17.622441 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0942975b-f269-434f-8853-927e58959a1f" containerName="ceilometer-central-agent" containerID="cri-o://b0b41407554d459aaff773e093368964bd37cb17a1be5cc34db0a9d94e6d0046" gracePeriod=30 Dec 15 05:55:17 crc kubenswrapper[4747]: I1215 05:55:17.622483 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0942975b-f269-434f-8853-927e58959a1f" containerName="sg-core" containerID="cri-o://f69b62137d4a382a55917c62c95a558a7b68ab082637a743719953bef43ef398" gracePeriod=30 Dec 15 05:55:17 crc 
kubenswrapper[4747]: I1215 05:55:17.622520 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0942975b-f269-434f-8853-927e58959a1f" containerName="proxy-httpd" containerID="cri-o://e20f0964c59f6bde3634ff7c142313a2d02c0af488ad408d45e15afdf8ad8fd7" gracePeriod=30 Dec 15 05:55:17 crc kubenswrapper[4747]: I1215 05:55:17.622519 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0942975b-f269-434f-8853-927e58959a1f" containerName="ceilometer-notification-agent" containerID="cri-o://c448c98e5895b0f8e6c9ee467b1cc9db2f852a18b074edbc14b08f8836c89185" gracePeriod=30 Dec 15 05:55:17 crc kubenswrapper[4747]: I1215 05:55:17.628910 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0942975b-f269-434f-8853-927e58959a1f" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.191:3000/\": read tcp 10.217.0.2:59868->10.217.0.191:3000: read: connection reset by peer" Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.108666 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.346824 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864557ccdf-8grfz" event={"ID":"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0","Type":"ContainerStarted","Data":"9d7fe17a320d3762e3e55f99d0793a54627fe100bc799b5c6b27e20f6a69466b"} Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.346888 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.349804 4747 generic.go:334] "Generic (PLEG): container finished" podID="0942975b-f269-434f-8853-927e58959a1f" containerID="e20f0964c59f6bde3634ff7c142313a2d02c0af488ad408d45e15afdf8ad8fd7" exitCode=0 Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 
05:55:18.349847 4747 generic.go:334] "Generic (PLEG): container finished" podID="0942975b-f269-434f-8853-927e58959a1f" containerID="f69b62137d4a382a55917c62c95a558a7b68ab082637a743719953bef43ef398" exitCode=2 Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.349855 4747 generic.go:334] "Generic (PLEG): container finished" podID="0942975b-f269-434f-8853-927e58959a1f" containerID="c448c98e5895b0f8e6c9ee467b1cc9db2f852a18b074edbc14b08f8836c89185" exitCode=0 Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.349864 4747 generic.go:334] "Generic (PLEG): container finished" podID="0942975b-f269-434f-8853-927e58959a1f" containerID="b0b41407554d459aaff773e093368964bd37cb17a1be5cc34db0a9d94e6d0046" exitCode=0 Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.349873 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0942975b-f269-434f-8853-927e58959a1f","Type":"ContainerDied","Data":"e20f0964c59f6bde3634ff7c142313a2d02c0af488ad408d45e15afdf8ad8fd7"} Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.349916 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0942975b-f269-434f-8853-927e58959a1f","Type":"ContainerDied","Data":"f69b62137d4a382a55917c62c95a558a7b68ab082637a743719953bef43ef398"} Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.349945 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0942975b-f269-434f-8853-927e58959a1f","Type":"ContainerDied","Data":"c448c98e5895b0f8e6c9ee467b1cc9db2f852a18b074edbc14b08f8836c89185"} Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.349955 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0942975b-f269-434f-8853-927e58959a1f","Type":"ContainerDied","Data":"b0b41407554d459aaff773e093368964bd37cb17a1be5cc34db0a9d94e6d0046"} Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.350098 4747 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="070ff6d1-3ab4-4212-80c9-82fd3107dcd6" containerName="nova-api-log" containerID="cri-o://e1b808e4d3c42b40c8dd305f102effed8dd6e99b5920edfb2f339341fd040907" gracePeriod=30 Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.350149 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="070ff6d1-3ab4-4212-80c9-82fd3107dcd6" containerName="nova-api-api" containerID="cri-o://82f174e8db062bed9818637f1aa9f848cf6a0dd19057273377474ca6534f2b87" gracePeriod=30 Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.368346 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-864557ccdf-8grfz" podStartSLOduration=3.368330898 podStartE2EDuration="3.368330898s" podCreationTimestamp="2025-12-15 05:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:55:18.367362167 +0000 UTC m=+1082.063874083" watchObservedRunningTime="2025-12-15 05:55:18.368330898 +0000 UTC m=+1082.064842815" Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.436055 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.585295 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-scripts\") pod \"0942975b-f269-434f-8853-927e58959a1f\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.585542 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-ceilometer-tls-certs\") pod \"0942975b-f269-434f-8853-927e58959a1f\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.585627 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-combined-ca-bundle\") pod \"0942975b-f269-434f-8853-927e58959a1f\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.585679 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0942975b-f269-434f-8853-927e58959a1f-run-httpd\") pod \"0942975b-f269-434f-8853-927e58959a1f\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.585760 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phtcs\" (UniqueName: \"kubernetes.io/projected/0942975b-f269-434f-8853-927e58959a1f-kube-api-access-phtcs\") pod \"0942975b-f269-434f-8853-927e58959a1f\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.585909 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-sg-core-conf-yaml\") pod \"0942975b-f269-434f-8853-927e58959a1f\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.585981 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0942975b-f269-434f-8853-927e58959a1f-log-httpd\") pod \"0942975b-f269-434f-8853-927e58959a1f\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.586049 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-config-data\") pod \"0942975b-f269-434f-8853-927e58959a1f\" (UID: \"0942975b-f269-434f-8853-927e58959a1f\") " Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.586761 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0942975b-f269-434f-8853-927e58959a1f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0942975b-f269-434f-8853-927e58959a1f" (UID: "0942975b-f269-434f-8853-927e58959a1f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.586807 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0942975b-f269-434f-8853-927e58959a1f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0942975b-f269-434f-8853-927e58959a1f" (UID: "0942975b-f269-434f-8853-927e58959a1f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.588606 4747 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0942975b-f269-434f-8853-927e58959a1f-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.588913 4747 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0942975b-f269-434f-8853-927e58959a1f-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.595420 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0942975b-f269-434f-8853-927e58959a1f-kube-api-access-phtcs" (OuterVolumeSpecName: "kube-api-access-phtcs") pod "0942975b-f269-434f-8853-927e58959a1f" (UID: "0942975b-f269-434f-8853-927e58959a1f"). InnerVolumeSpecName "kube-api-access-phtcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.604510 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-scripts" (OuterVolumeSpecName: "scripts") pod "0942975b-f269-434f-8853-927e58959a1f" (UID: "0942975b-f269-434f-8853-927e58959a1f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.630415 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0942975b-f269-434f-8853-927e58959a1f" (UID: "0942975b-f269-434f-8853-927e58959a1f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.643773 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0942975b-f269-434f-8853-927e58959a1f" (UID: "0942975b-f269-434f-8853-927e58959a1f"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.663605 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0942975b-f269-434f-8853-927e58959a1f" (UID: "0942975b-f269-434f-8853-927e58959a1f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.678578 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-config-data" (OuterVolumeSpecName: "config-data") pod "0942975b-f269-434f-8853-927e58959a1f" (UID: "0942975b-f269-434f-8853-927e58959a1f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.692132 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.692158 4747 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.692170 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.692179 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phtcs\" (UniqueName: \"kubernetes.io/projected/0942975b-f269-434f-8853-927e58959a1f-kube-api-access-phtcs\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.692188 4747 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:18 crc kubenswrapper[4747]: I1215 05:55:18.692197 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0942975b-f269-434f-8853-927e58959a1f-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.364735 4747 generic.go:334] "Generic (PLEG): container finished" podID="070ff6d1-3ab4-4212-80c9-82fd3107dcd6" containerID="e1b808e4d3c42b40c8dd305f102effed8dd6e99b5920edfb2f339341fd040907" exitCode=143 Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.364825 4747 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"070ff6d1-3ab4-4212-80c9-82fd3107dcd6","Type":"ContainerDied","Data":"e1b808e4d3c42b40c8dd305f102effed8dd6e99b5920edfb2f339341fd040907"} Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.368633 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0942975b-f269-434f-8853-927e58959a1f","Type":"ContainerDied","Data":"f937e0c6bb2adac5b06b7a66baae5722c1acb2dabb0ad0e27ed954143a0be37c"} Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.368712 4747 scope.go:117] "RemoveContainer" containerID="e20f0964c59f6bde3634ff7c142313a2d02c0af488ad408d45e15afdf8ad8fd7" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.368729 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.413635 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.418392 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.422536 4747 scope.go:117] "RemoveContainer" containerID="f69b62137d4a382a55917c62c95a558a7b68ab082637a743719953bef43ef398" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.431550 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:55:19 crc kubenswrapper[4747]: E1215 05:55:19.432492 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0942975b-f269-434f-8853-927e58959a1f" containerName="ceilometer-central-agent" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.432514 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="0942975b-f269-434f-8853-927e58959a1f" containerName="ceilometer-central-agent" Dec 15 05:55:19 crc kubenswrapper[4747]: E1215 05:55:19.432536 4747 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0942975b-f269-434f-8853-927e58959a1f" containerName="ceilometer-notification-agent" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.432544 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="0942975b-f269-434f-8853-927e58959a1f" containerName="ceilometer-notification-agent" Dec 15 05:55:19 crc kubenswrapper[4747]: E1215 05:55:19.432564 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0942975b-f269-434f-8853-927e58959a1f" containerName="sg-core" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.432570 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="0942975b-f269-434f-8853-927e58959a1f" containerName="sg-core" Dec 15 05:55:19 crc kubenswrapper[4747]: E1215 05:55:19.432587 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0942975b-f269-434f-8853-927e58959a1f" containerName="proxy-httpd" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.432593 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="0942975b-f269-434f-8853-927e58959a1f" containerName="proxy-httpd" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.432763 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="0942975b-f269-434f-8853-927e58959a1f" containerName="proxy-httpd" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.432782 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="0942975b-f269-434f-8853-927e58959a1f" containerName="sg-core" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.432795 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="0942975b-f269-434f-8853-927e58959a1f" containerName="ceilometer-notification-agent" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.432811 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="0942975b-f269-434f-8853-927e58959a1f" containerName="ceilometer-central-agent" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.434452 4747 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.439750 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.439886 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.439990 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.440613 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.443323 4747 scope.go:117] "RemoveContainer" containerID="c448c98e5895b0f8e6c9ee467b1cc9db2f852a18b074edbc14b08f8836c89185" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.472953 4747 scope.go:117] "RemoveContainer" containerID="b0b41407554d459aaff773e093368964bd37cb17a1be5cc34db0a9d94e6d0046" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.611073 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.611146 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-config-data\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.611209 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af61ae2-3917-4f5c-b443-e2b28d633424-run-httpd\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.611228 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-scripts\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.611324 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfz6b\" (UniqueName: \"kubernetes.io/projected/3af61ae2-3917-4f5c-b443-e2b28d633424-kube-api-access-pfz6b\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.611575 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.611699 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af61ae2-3917-4f5c-b443-e2b28d633424-log-httpd\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.611759 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.713920 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af61ae2-3917-4f5c-b443-e2b28d633424-run-httpd\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.714002 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-scripts\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.714035 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfz6b\" (UniqueName: \"kubernetes.io/projected/3af61ae2-3917-4f5c-b443-e2b28d633424-kube-api-access-pfz6b\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.714130 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.714182 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af61ae2-3917-4f5c-b443-e2b28d633424-log-httpd\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 
05:55:19.714216 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.714267 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.714318 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-config-data\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.714487 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af61ae2-3917-4f5c-b443-e2b28d633424-run-httpd\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.714761 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af61ae2-3917-4f5c-b443-e2b28d633424-log-httpd\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.721311 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.721458 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-config-data\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.721505 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.722412 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-scripts\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.723365 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.734381 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfz6b\" (UniqueName: \"kubernetes.io/projected/3af61ae2-3917-4f5c-b443-e2b28d633424-kube-api-access-pfz6b\") pod \"ceilometer-0\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.748484 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:55:19 crc kubenswrapper[4747]: I1215 05:55:19.869676 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:55:20 crc kubenswrapper[4747]: I1215 05:55:20.179741 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:55:20 crc kubenswrapper[4747]: W1215 05:55:20.187355 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3af61ae2_3917_4f5c_b443_e2b28d633424.slice/crio-e34b335073599b030c0bf712ef1023291a1aa254b3b96437d1f49372f0374cec WatchSource:0}: Error finding container e34b335073599b030c0bf712ef1023291a1aa254b3b96437d1f49372f0374cec: Status 404 returned error can't find the container with id e34b335073599b030c0bf712ef1023291a1aa254b3b96437d1f49372f0374cec Dec 15 05:55:20 crc kubenswrapper[4747]: I1215 05:55:20.379973 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af61ae2-3917-4f5c-b443-e2b28d633424","Type":"ContainerStarted","Data":"e34b335073599b030c0bf712ef1023291a1aa254b3b96437d1f49372f0374cec"} Dec 15 05:55:20 crc kubenswrapper[4747]: I1215 05:55:20.639318 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0942975b-f269-434f-8853-927e58959a1f" path="/var/lib/kubelet/pods/0942975b-f269-434f-8853-927e58959a1f/volumes" Dec 15 05:55:21 crc kubenswrapper[4747]: I1215 05:55:21.396144 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af61ae2-3917-4f5c-b443-e2b28d633424","Type":"ContainerStarted","Data":"e779a5c8b92578d18727b03984d92b2d45e4422f99506e480307fd18f836945e"} Dec 15 05:55:21 crc kubenswrapper[4747]: I1215 05:55:21.916199 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 15 05:55:21 crc kubenswrapper[4747]: I1215 05:55:21.966633 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6ks8\" (UniqueName: \"kubernetes.io/projected/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-kube-api-access-c6ks8\") pod \"070ff6d1-3ab4-4212-80c9-82fd3107dcd6\" (UID: \"070ff6d1-3ab4-4212-80c9-82fd3107dcd6\") " Dec 15 05:55:21 crc kubenswrapper[4747]: I1215 05:55:21.966706 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-combined-ca-bundle\") pod \"070ff6d1-3ab4-4212-80c9-82fd3107dcd6\" (UID: \"070ff6d1-3ab4-4212-80c9-82fd3107dcd6\") " Dec 15 05:55:21 crc kubenswrapper[4747]: I1215 05:55:21.966810 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-logs\") pod \"070ff6d1-3ab4-4212-80c9-82fd3107dcd6\" (UID: \"070ff6d1-3ab4-4212-80c9-82fd3107dcd6\") " Dec 15 05:55:21 crc kubenswrapper[4747]: I1215 05:55:21.966848 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-config-data\") pod \"070ff6d1-3ab4-4212-80c9-82fd3107dcd6\" (UID: \"070ff6d1-3ab4-4212-80c9-82fd3107dcd6\") " Dec 15 05:55:21 crc kubenswrapper[4747]: I1215 05:55:21.970015 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-logs" (OuterVolumeSpecName: "logs") pod "070ff6d1-3ab4-4212-80c9-82fd3107dcd6" (UID: "070ff6d1-3ab4-4212-80c9-82fd3107dcd6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:55:21 crc kubenswrapper[4747]: I1215 05:55:21.972417 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-kube-api-access-c6ks8" (OuterVolumeSpecName: "kube-api-access-c6ks8") pod "070ff6d1-3ab4-4212-80c9-82fd3107dcd6" (UID: "070ff6d1-3ab4-4212-80c9-82fd3107dcd6"). InnerVolumeSpecName "kube-api-access-c6ks8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:55:21 crc kubenswrapper[4747]: I1215 05:55:21.973741 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6ks8\" (UniqueName: \"kubernetes.io/projected/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-kube-api-access-c6ks8\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:21 crc kubenswrapper[4747]: I1215 05:55:21.973777 4747 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-logs\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.005059 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-config-data" (OuterVolumeSpecName: "config-data") pod "070ff6d1-3ab4-4212-80c9-82fd3107dcd6" (UID: "070ff6d1-3ab4-4212-80c9-82fd3107dcd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.027879 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "070ff6d1-3ab4-4212-80c9-82fd3107dcd6" (UID: "070ff6d1-3ab4-4212-80c9-82fd3107dcd6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.077053 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.077439 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/070ff6d1-3ab4-4212-80c9-82fd3107dcd6-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.409190 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af61ae2-3917-4f5c-b443-e2b28d633424","Type":"ContainerStarted","Data":"a6f404c7383573b683f00b082ee87a8acd01a4b6be8bd67a19fa8b7184ccca4f"} Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.412067 4747 generic.go:334] "Generic (PLEG): container finished" podID="070ff6d1-3ab4-4212-80c9-82fd3107dcd6" containerID="82f174e8db062bed9818637f1aa9f848cf6a0dd19057273377474ca6534f2b87" exitCode=0 Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.412144 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"070ff6d1-3ab4-4212-80c9-82fd3107dcd6","Type":"ContainerDied","Data":"82f174e8db062bed9818637f1aa9f848cf6a0dd19057273377474ca6534f2b87"} Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.412187 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"070ff6d1-3ab4-4212-80c9-82fd3107dcd6","Type":"ContainerDied","Data":"daff495911359a36bc287341ecbd8196b85f4603691d503c29828b704286be53"} Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.412222 4747 scope.go:117] "RemoveContainer" containerID="82f174e8db062bed9818637f1aa9f848cf6a0dd19057273377474ca6534f2b87" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.412470 4747 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.434160 4747 scope.go:117] "RemoveContainer" containerID="e1b808e4d3c42b40c8dd305f102effed8dd6e99b5920edfb2f339341fd040907" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.442700 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.455837 4747 scope.go:117] "RemoveContainer" containerID="82f174e8db062bed9818637f1aa9f848cf6a0dd19057273377474ca6534f2b87" Dec 15 05:55:22 crc kubenswrapper[4747]: E1215 05:55:22.456720 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82f174e8db062bed9818637f1aa9f848cf6a0dd19057273377474ca6534f2b87\": container with ID starting with 82f174e8db062bed9818637f1aa9f848cf6a0dd19057273377474ca6534f2b87 not found: ID does not exist" containerID="82f174e8db062bed9818637f1aa9f848cf6a0dd19057273377474ca6534f2b87" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.456755 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f174e8db062bed9818637f1aa9f848cf6a0dd19057273377474ca6534f2b87"} err="failed to get container status \"82f174e8db062bed9818637f1aa9f848cf6a0dd19057273377474ca6534f2b87\": rpc error: code = NotFound desc = could not find container \"82f174e8db062bed9818637f1aa9f848cf6a0dd19057273377474ca6534f2b87\": container with ID starting with 82f174e8db062bed9818637f1aa9f848cf6a0dd19057273377474ca6534f2b87 not found: ID does not exist" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.456781 4747 scope.go:117] "RemoveContainer" containerID="e1b808e4d3c42b40c8dd305f102effed8dd6e99b5920edfb2f339341fd040907" Dec 15 05:55:22 crc kubenswrapper[4747]: E1215 05:55:22.457122 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"e1b808e4d3c42b40c8dd305f102effed8dd6e99b5920edfb2f339341fd040907\": container with ID starting with e1b808e4d3c42b40c8dd305f102effed8dd6e99b5920edfb2f339341fd040907 not found: ID does not exist" containerID="e1b808e4d3c42b40c8dd305f102effed8dd6e99b5920edfb2f339341fd040907" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.457158 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1b808e4d3c42b40c8dd305f102effed8dd6e99b5920edfb2f339341fd040907"} err="failed to get container status \"e1b808e4d3c42b40c8dd305f102effed8dd6e99b5920edfb2f339341fd040907\": rpc error: code = NotFound desc = could not find container \"e1b808e4d3c42b40c8dd305f102effed8dd6e99b5920edfb2f339341fd040907\": container with ID starting with e1b808e4d3c42b40c8dd305f102effed8dd6e99b5920edfb2f339341fd040907 not found: ID does not exist" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.461785 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.474422 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 15 05:55:22 crc kubenswrapper[4747]: E1215 05:55:22.475013 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070ff6d1-3ab4-4212-80c9-82fd3107dcd6" containerName="nova-api-api" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.475035 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="070ff6d1-3ab4-4212-80c9-82fd3107dcd6" containerName="nova-api-api" Dec 15 05:55:22 crc kubenswrapper[4747]: E1215 05:55:22.475053 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070ff6d1-3ab4-4212-80c9-82fd3107dcd6" containerName="nova-api-log" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.475059 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="070ff6d1-3ab4-4212-80c9-82fd3107dcd6" containerName="nova-api-log" Dec 15 05:55:22 crc kubenswrapper[4747]: 
I1215 05:55:22.475288 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="070ff6d1-3ab4-4212-80c9-82fd3107dcd6" containerName="nova-api-log" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.475319 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="070ff6d1-3ab4-4212-80c9-82fd3107dcd6" containerName="nova-api-api" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.476532 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.480285 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.480588 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.480840 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.494567 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.588471 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d859c86-54f1-459b-82a5-1ed6739f42f9-public-tls-certs\") pod \"nova-api-0\" (UID: \"7d859c86-54f1-459b-82a5-1ed6739f42f9\") " pod="openstack/nova-api-0" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.588520 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d859c86-54f1-459b-82a5-1ed6739f42f9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7d859c86-54f1-459b-82a5-1ed6739f42f9\") " pod="openstack/nova-api-0" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.588558 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d859c86-54f1-459b-82a5-1ed6739f42f9-logs\") pod \"nova-api-0\" (UID: \"7d859c86-54f1-459b-82a5-1ed6739f42f9\") " pod="openstack/nova-api-0" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.588654 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8svn8\" (UniqueName: \"kubernetes.io/projected/7d859c86-54f1-459b-82a5-1ed6739f42f9-kube-api-access-8svn8\") pod \"nova-api-0\" (UID: \"7d859c86-54f1-459b-82a5-1ed6739f42f9\") " pod="openstack/nova-api-0" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.588719 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d859c86-54f1-459b-82a5-1ed6739f42f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7d859c86-54f1-459b-82a5-1ed6739f42f9\") " pod="openstack/nova-api-0" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.588816 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d859c86-54f1-459b-82a5-1ed6739f42f9-config-data\") pod \"nova-api-0\" (UID: \"7d859c86-54f1-459b-82a5-1ed6739f42f9\") " pod="openstack/nova-api-0" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.637707 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="070ff6d1-3ab4-4212-80c9-82fd3107dcd6" path="/var/lib/kubelet/pods/070ff6d1-3ab4-4212-80c9-82fd3107dcd6/volumes" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.690262 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8svn8\" (UniqueName: \"kubernetes.io/projected/7d859c86-54f1-459b-82a5-1ed6739f42f9-kube-api-access-8svn8\") pod \"nova-api-0\" (UID: \"7d859c86-54f1-459b-82a5-1ed6739f42f9\") " 
pod="openstack/nova-api-0" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.690334 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d859c86-54f1-459b-82a5-1ed6739f42f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7d859c86-54f1-459b-82a5-1ed6739f42f9\") " pod="openstack/nova-api-0" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.690409 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d859c86-54f1-459b-82a5-1ed6739f42f9-config-data\") pod \"nova-api-0\" (UID: \"7d859c86-54f1-459b-82a5-1ed6739f42f9\") " pod="openstack/nova-api-0" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.690441 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d859c86-54f1-459b-82a5-1ed6739f42f9-public-tls-certs\") pod \"nova-api-0\" (UID: \"7d859c86-54f1-459b-82a5-1ed6739f42f9\") " pod="openstack/nova-api-0" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.690461 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d859c86-54f1-459b-82a5-1ed6739f42f9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7d859c86-54f1-459b-82a5-1ed6739f42f9\") " pod="openstack/nova-api-0" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.690483 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d859c86-54f1-459b-82a5-1ed6739f42f9-logs\") pod \"nova-api-0\" (UID: \"7d859c86-54f1-459b-82a5-1ed6739f42f9\") " pod="openstack/nova-api-0" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.690821 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7d859c86-54f1-459b-82a5-1ed6739f42f9-logs\") pod \"nova-api-0\" (UID: \"7d859c86-54f1-459b-82a5-1ed6739f42f9\") " pod="openstack/nova-api-0" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.694982 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d859c86-54f1-459b-82a5-1ed6739f42f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7d859c86-54f1-459b-82a5-1ed6739f42f9\") " pod="openstack/nova-api-0" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.695402 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d859c86-54f1-459b-82a5-1ed6739f42f9-public-tls-certs\") pod \"nova-api-0\" (UID: \"7d859c86-54f1-459b-82a5-1ed6739f42f9\") " pod="openstack/nova-api-0" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.695399 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d859c86-54f1-459b-82a5-1ed6739f42f9-config-data\") pod \"nova-api-0\" (UID: \"7d859c86-54f1-459b-82a5-1ed6739f42f9\") " pod="openstack/nova-api-0" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.696441 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d859c86-54f1-459b-82a5-1ed6739f42f9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7d859c86-54f1-459b-82a5-1ed6739f42f9\") " pod="openstack/nova-api-0" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.705052 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8svn8\" (UniqueName: \"kubernetes.io/projected/7d859c86-54f1-459b-82a5-1ed6739f42f9-kube-api-access-8svn8\") pod \"nova-api-0\" (UID: \"7d859c86-54f1-459b-82a5-1ed6739f42f9\") " pod="openstack/nova-api-0" Dec 15 05:55:22 crc kubenswrapper[4747]: I1215 05:55:22.792962 4747 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 15 05:55:23 crc kubenswrapper[4747]: I1215 05:55:23.210130 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 15 05:55:23 crc kubenswrapper[4747]: I1215 05:55:23.428994 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7d859c86-54f1-459b-82a5-1ed6739f42f9","Type":"ContainerStarted","Data":"ba97c3c049593136f6e5964080d3f706bbd23af978a07fb7d10f37ef2c1a5fd3"} Dec 15 05:55:23 crc kubenswrapper[4747]: I1215 05:55:23.429073 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7d859c86-54f1-459b-82a5-1ed6739f42f9","Type":"ContainerStarted","Data":"07849fae5cc907ad51d4ecfa41d5a93f4847fc50438a34a33dde6562608679bd"} Dec 15 05:55:23 crc kubenswrapper[4747]: I1215 05:55:23.432973 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af61ae2-3917-4f5c-b443-e2b28d633424","Type":"ContainerStarted","Data":"b66a330f0fa67e6b67b93f361cb1f0919a3b3897f78ce25c66ddfbd2530e07aa"} Dec 15 05:55:24 crc kubenswrapper[4747]: I1215 05:55:24.450756 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7d859c86-54f1-459b-82a5-1ed6739f42f9","Type":"ContainerStarted","Data":"f9db4507209d74143eb71c496fbb5deb37b9220bf3eca55cb9161b6c3e36ec4e"} Dec 15 05:55:24 crc kubenswrapper[4747]: I1215 05:55:24.474783 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.474751746 podStartE2EDuration="2.474751746s" podCreationTimestamp="2025-12-15 05:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:55:24.466645925 +0000 UTC m=+1088.163157842" watchObservedRunningTime="2025-12-15 05:55:24.474751746 +0000 UTC m=+1088.171263663" Dec 15 05:55:25 crc 
kubenswrapper[4747]: I1215 05:55:25.465251 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af61ae2-3917-4f5c-b443-e2b28d633424","Type":"ContainerStarted","Data":"3dfb1e17e00abdf02c3d061163e3b847f212d24ef99dfb40310a0ff0fab46027"} Dec 15 05:55:25 crc kubenswrapper[4747]: I1215 05:55:25.465679 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3af61ae2-3917-4f5c-b443-e2b28d633424" containerName="sg-core" containerID="cri-o://b66a330f0fa67e6b67b93f361cb1f0919a3b3897f78ce25c66ddfbd2530e07aa" gracePeriod=30 Dec 15 05:55:25 crc kubenswrapper[4747]: I1215 05:55:25.465546 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3af61ae2-3917-4f5c-b443-e2b28d633424" containerName="ceilometer-central-agent" containerID="cri-o://e779a5c8b92578d18727b03984d92b2d45e4422f99506e480307fd18f836945e" gracePeriod=30 Dec 15 05:55:25 crc kubenswrapper[4747]: I1215 05:55:25.465744 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3af61ae2-3917-4f5c-b443-e2b28d633424" containerName="ceilometer-notification-agent" containerID="cri-o://a6f404c7383573b683f00b082ee87a8acd01a4b6be8bd67a19fa8b7184ccca4f" gracePeriod=30 Dec 15 05:55:25 crc kubenswrapper[4747]: I1215 05:55:25.467023 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3af61ae2-3917-4f5c-b443-e2b28d633424" containerName="proxy-httpd" containerID="cri-o://3dfb1e17e00abdf02c3d061163e3b847f212d24ef99dfb40310a0ff0fab46027" gracePeriod=30 Dec 15 05:55:25 crc kubenswrapper[4747]: I1215 05:55:25.489521 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.062442793 podStartE2EDuration="6.489505233s" podCreationTimestamp="2025-12-15 05:55:19 +0000 UTC" firstStartedPulling="2025-12-15 
05:55:20.189437825 +0000 UTC m=+1083.885949742" lastFinishedPulling="2025-12-15 05:55:24.616500274 +0000 UTC m=+1088.313012182" observedRunningTime="2025-12-15 05:55:25.483829501 +0000 UTC m=+1089.180341418" watchObservedRunningTime="2025-12-15 05:55:25.489505233 +0000 UTC m=+1089.186017150" Dec 15 05:55:25 crc kubenswrapper[4747]: I1215 05:55:25.821010 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:55:25 crc kubenswrapper[4747]: I1215 05:55:25.888100 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78556b4b47-jdvwf"] Dec 15 05:55:25 crc kubenswrapper[4747]: I1215 05:55:25.888348 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" podUID="d306a755-b503-4799-b23b-05b1afe561eb" containerName="dnsmasq-dns" containerID="cri-o://a08da45d3be84d6cae5e5d6b0a9fa865e095b6cc5f5001b4ccd2c076007f6fbb" gracePeriod=10 Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.264609 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.370841 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-ovsdbserver-sb\") pod \"d306a755-b503-4799-b23b-05b1afe561eb\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.370915 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-ovsdbserver-nb\") pod \"d306a755-b503-4799-b23b-05b1afe561eb\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.370982 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-config\") pod \"d306a755-b503-4799-b23b-05b1afe561eb\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.371012 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-dns-svc\") pod \"d306a755-b503-4799-b23b-05b1afe561eb\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.371265 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxmq8\" (UniqueName: \"kubernetes.io/projected/d306a755-b503-4799-b23b-05b1afe561eb-kube-api-access-hxmq8\") pod \"d306a755-b503-4799-b23b-05b1afe561eb\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.371292 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-dns-swift-storage-0\") pod \"d306a755-b503-4799-b23b-05b1afe561eb\" (UID: \"d306a755-b503-4799-b23b-05b1afe561eb\") " Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.378045 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d306a755-b503-4799-b23b-05b1afe561eb-kube-api-access-hxmq8" (OuterVolumeSpecName: "kube-api-access-hxmq8") pod "d306a755-b503-4799-b23b-05b1afe561eb" (UID: "d306a755-b503-4799-b23b-05b1afe561eb"). InnerVolumeSpecName "kube-api-access-hxmq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.411964 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-config" (OuterVolumeSpecName: "config") pod "d306a755-b503-4799-b23b-05b1afe561eb" (UID: "d306a755-b503-4799-b23b-05b1afe561eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.416216 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d306a755-b503-4799-b23b-05b1afe561eb" (UID: "d306a755-b503-4799-b23b-05b1afe561eb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.416654 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d306a755-b503-4799-b23b-05b1afe561eb" (UID: "d306a755-b503-4799-b23b-05b1afe561eb"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.423290 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d306a755-b503-4799-b23b-05b1afe561eb" (UID: "d306a755-b503-4799-b23b-05b1afe561eb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.427227 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d306a755-b503-4799-b23b-05b1afe561eb" (UID: "d306a755-b503-4799-b23b-05b1afe561eb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.475146 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxmq8\" (UniqueName: \"kubernetes.io/projected/d306a755-b503-4799-b23b-05b1afe561eb-kube-api-access-hxmq8\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.475181 4747 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.475190 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.475199 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.475214 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.475224 4747 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d306a755-b503-4799-b23b-05b1afe561eb-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.476112 4747 generic.go:334] "Generic (PLEG): container finished" podID="3af61ae2-3917-4f5c-b443-e2b28d633424" containerID="3dfb1e17e00abdf02c3d061163e3b847f212d24ef99dfb40310a0ff0fab46027" exitCode=0 Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.476143 4747 generic.go:334] "Generic (PLEG): container finished" podID="3af61ae2-3917-4f5c-b443-e2b28d633424" containerID="b66a330f0fa67e6b67b93f361cb1f0919a3b3897f78ce25c66ddfbd2530e07aa" exitCode=2 Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.476152 4747 generic.go:334] "Generic (PLEG): container finished" podID="3af61ae2-3917-4f5c-b443-e2b28d633424" containerID="a6f404c7383573b683f00b082ee87a8acd01a4b6be8bd67a19fa8b7184ccca4f" exitCode=0 Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.476200 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af61ae2-3917-4f5c-b443-e2b28d633424","Type":"ContainerDied","Data":"3dfb1e17e00abdf02c3d061163e3b847f212d24ef99dfb40310a0ff0fab46027"} Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.476248 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af61ae2-3917-4f5c-b443-e2b28d633424","Type":"ContainerDied","Data":"b66a330f0fa67e6b67b93f361cb1f0919a3b3897f78ce25c66ddfbd2530e07aa"} Dec 15 05:55:26 crc 
kubenswrapper[4747]: I1215 05:55:26.476259 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af61ae2-3917-4f5c-b443-e2b28d633424","Type":"ContainerDied","Data":"a6f404c7383573b683f00b082ee87a8acd01a4b6be8bd67a19fa8b7184ccca4f"} Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.478359 4747 generic.go:334] "Generic (PLEG): container finished" podID="d306a755-b503-4799-b23b-05b1afe561eb" containerID="a08da45d3be84d6cae5e5d6b0a9fa865e095b6cc5f5001b4ccd2c076007f6fbb" exitCode=0 Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.478380 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" event={"ID":"d306a755-b503-4799-b23b-05b1afe561eb","Type":"ContainerDied","Data":"a08da45d3be84d6cae5e5d6b0a9fa865e095b6cc5f5001b4ccd2c076007f6fbb"} Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.478400 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" event={"ID":"d306a755-b503-4799-b23b-05b1afe561eb","Type":"ContainerDied","Data":"6ff07bfbad3b777e62a33a9a17790054fff6281cf300040031c0593a5062e743"} Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.478421 4747 scope.go:117] "RemoveContainer" containerID="a08da45d3be84d6cae5e5d6b0a9fa865e095b6cc5f5001b4ccd2c076007f6fbb" Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.478575 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78556b4b47-jdvwf" Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.512635 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78556b4b47-jdvwf"] Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.515073 4747 scope.go:117] "RemoveContainer" containerID="f3301224d1e44581d89a204b56b74b2f339dc1b29dbc9669ce366463d55391d3" Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.521993 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78556b4b47-jdvwf"] Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.536254 4747 scope.go:117] "RemoveContainer" containerID="a08da45d3be84d6cae5e5d6b0a9fa865e095b6cc5f5001b4ccd2c076007f6fbb" Dec 15 05:55:26 crc kubenswrapper[4747]: E1215 05:55:26.536509 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a08da45d3be84d6cae5e5d6b0a9fa865e095b6cc5f5001b4ccd2c076007f6fbb\": container with ID starting with a08da45d3be84d6cae5e5d6b0a9fa865e095b6cc5f5001b4ccd2c076007f6fbb not found: ID does not exist" containerID="a08da45d3be84d6cae5e5d6b0a9fa865e095b6cc5f5001b4ccd2c076007f6fbb" Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.536532 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a08da45d3be84d6cae5e5d6b0a9fa865e095b6cc5f5001b4ccd2c076007f6fbb"} err="failed to get container status \"a08da45d3be84d6cae5e5d6b0a9fa865e095b6cc5f5001b4ccd2c076007f6fbb\": rpc error: code = NotFound desc = could not find container \"a08da45d3be84d6cae5e5d6b0a9fa865e095b6cc5f5001b4ccd2c076007f6fbb\": container with ID starting with a08da45d3be84d6cae5e5d6b0a9fa865e095b6cc5f5001b4ccd2c076007f6fbb not found: ID does not exist" Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.536550 4747 scope.go:117] "RemoveContainer" containerID="f3301224d1e44581d89a204b56b74b2f339dc1b29dbc9669ce366463d55391d3" Dec 15 
05:55:26 crc kubenswrapper[4747]: E1215 05:55:26.536824 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3301224d1e44581d89a204b56b74b2f339dc1b29dbc9669ce366463d55391d3\": container with ID starting with f3301224d1e44581d89a204b56b74b2f339dc1b29dbc9669ce366463d55391d3 not found: ID does not exist" containerID="f3301224d1e44581d89a204b56b74b2f339dc1b29dbc9669ce366463d55391d3" Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.536841 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3301224d1e44581d89a204b56b74b2f339dc1b29dbc9669ce366463d55391d3"} err="failed to get container status \"f3301224d1e44581d89a204b56b74b2f339dc1b29dbc9669ce366463d55391d3\": rpc error: code = NotFound desc = could not find container \"f3301224d1e44581d89a204b56b74b2f339dc1b29dbc9669ce366463d55391d3\": container with ID starting with f3301224d1e44581d89a204b56b74b2f339dc1b29dbc9669ce366463d55391d3 not found: ID does not exist" Dec 15 05:55:26 crc kubenswrapper[4747]: I1215 05:55:26.640356 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d306a755-b503-4799-b23b-05b1afe561eb" path="/var/lib/kubelet/pods/d306a755-b503-4799-b23b-05b1afe561eb/volumes" Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.531739 4747 generic.go:334] "Generic (PLEG): container finished" podID="3af61ae2-3917-4f5c-b443-e2b28d633424" containerID="e779a5c8b92578d18727b03984d92b2d45e4422f99506e480307fd18f836945e" exitCode=0 Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.531849 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af61ae2-3917-4f5c-b443-e2b28d633424","Type":"ContainerDied","Data":"e779a5c8b92578d18727b03984d92b2d45e4422f99506e480307fd18f836945e"} Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.683997 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.860649 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af61ae2-3917-4f5c-b443-e2b28d633424-log-httpd\") pod \"3af61ae2-3917-4f5c-b443-e2b28d633424\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.860858 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfz6b\" (UniqueName: \"kubernetes.io/projected/3af61ae2-3917-4f5c-b443-e2b28d633424-kube-api-access-pfz6b\") pod \"3af61ae2-3917-4f5c-b443-e2b28d633424\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.860980 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-ceilometer-tls-certs\") pod \"3af61ae2-3917-4f5c-b443-e2b28d633424\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.861016 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-scripts\") pod \"3af61ae2-3917-4f5c-b443-e2b28d633424\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.861130 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-config-data\") pod \"3af61ae2-3917-4f5c-b443-e2b28d633424\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.861155 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/3af61ae2-3917-4f5c-b443-e2b28d633424-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3af61ae2-3917-4f5c-b443-e2b28d633424" (UID: "3af61ae2-3917-4f5c-b443-e2b28d633424"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.861256 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af61ae2-3917-4f5c-b443-e2b28d633424-run-httpd\") pod \"3af61ae2-3917-4f5c-b443-e2b28d633424\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.861308 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-sg-core-conf-yaml\") pod \"3af61ae2-3917-4f5c-b443-e2b28d633424\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.861340 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-combined-ca-bundle\") pod \"3af61ae2-3917-4f5c-b443-e2b28d633424\" (UID: \"3af61ae2-3917-4f5c-b443-e2b28d633424\") " Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.861808 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3af61ae2-3917-4f5c-b443-e2b28d633424-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3af61ae2-3917-4f5c-b443-e2b28d633424" (UID: "3af61ae2-3917-4f5c-b443-e2b28d633424"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.862785 4747 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af61ae2-3917-4f5c-b443-e2b28d633424-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.862804 4747 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af61ae2-3917-4f5c-b443-e2b28d633424-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.868741 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-scripts" (OuterVolumeSpecName: "scripts") pod "3af61ae2-3917-4f5c-b443-e2b28d633424" (UID: "3af61ae2-3917-4f5c-b443-e2b28d633424"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.868960 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3af61ae2-3917-4f5c-b443-e2b28d633424-kube-api-access-pfz6b" (OuterVolumeSpecName: "kube-api-access-pfz6b") pod "3af61ae2-3917-4f5c-b443-e2b28d633424" (UID: "3af61ae2-3917-4f5c-b443-e2b28d633424"). InnerVolumeSpecName "kube-api-access-pfz6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.887483 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3af61ae2-3917-4f5c-b443-e2b28d633424" (UID: "3af61ae2-3917-4f5c-b443-e2b28d633424"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.907787 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3af61ae2-3917-4f5c-b443-e2b28d633424" (UID: "3af61ae2-3917-4f5c-b443-e2b28d633424"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.917248 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3af61ae2-3917-4f5c-b443-e2b28d633424" (UID: "3af61ae2-3917-4f5c-b443-e2b28d633424"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.932401 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-config-data" (OuterVolumeSpecName: "config-data") pod "3af61ae2-3917-4f5c-b443-e2b28d633424" (UID: "3af61ae2-3917-4f5c-b443-e2b28d633424"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.965030 4747 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.965060 4747 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-scripts\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.965072 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.965083 4747 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.965093 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af61ae2-3917-4f5c-b443-e2b28d633424-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:30 crc kubenswrapper[4747]: I1215 05:55:30.965103 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfz6b\" (UniqueName: \"kubernetes.io/projected/3af61ae2-3917-4f5c-b443-e2b28d633424-kube-api-access-pfz6b\") on node \"crc\" DevicePath \"\"" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.545616 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af61ae2-3917-4f5c-b443-e2b28d633424","Type":"ContainerDied","Data":"e34b335073599b030c0bf712ef1023291a1aa254b3b96437d1f49372f0374cec"} Dec 15 05:55:31 crc 
kubenswrapper[4747]: I1215 05:55:31.545696 4747 scope.go:117] "RemoveContainer" containerID="3dfb1e17e00abdf02c3d061163e3b847f212d24ef99dfb40310a0ff0fab46027" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.545710 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.570328 4747 scope.go:117] "RemoveContainer" containerID="b66a330f0fa67e6b67b93f361cb1f0919a3b3897f78ce25c66ddfbd2530e07aa" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.585049 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.591836 4747 scope.go:117] "RemoveContainer" containerID="a6f404c7383573b683f00b082ee87a8acd01a4b6be8bd67a19fa8b7184ccca4f" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.593076 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.615736 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:55:31 crc kubenswrapper[4747]: E1215 05:55:31.616193 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d306a755-b503-4799-b23b-05b1afe561eb" containerName="dnsmasq-dns" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.616215 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="d306a755-b503-4799-b23b-05b1afe561eb" containerName="dnsmasq-dns" Dec 15 05:55:31 crc kubenswrapper[4747]: E1215 05:55:31.616241 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d306a755-b503-4799-b23b-05b1afe561eb" containerName="init" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.616248 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="d306a755-b503-4799-b23b-05b1afe561eb" containerName="init" Dec 15 05:55:31 crc kubenswrapper[4747]: E1215 05:55:31.616264 4747 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3af61ae2-3917-4f5c-b443-e2b28d633424" containerName="ceilometer-notification-agent" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.616271 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af61ae2-3917-4f5c-b443-e2b28d633424" containerName="ceilometer-notification-agent" Dec 15 05:55:31 crc kubenswrapper[4747]: E1215 05:55:31.616285 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af61ae2-3917-4f5c-b443-e2b28d633424" containerName="sg-core" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.616291 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af61ae2-3917-4f5c-b443-e2b28d633424" containerName="sg-core" Dec 15 05:55:31 crc kubenswrapper[4747]: E1215 05:55:31.616303 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af61ae2-3917-4f5c-b443-e2b28d633424" containerName="proxy-httpd" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.616311 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af61ae2-3917-4f5c-b443-e2b28d633424" containerName="proxy-httpd" Dec 15 05:55:31 crc kubenswrapper[4747]: E1215 05:55:31.616333 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af61ae2-3917-4f5c-b443-e2b28d633424" containerName="ceilometer-central-agent" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.616340 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af61ae2-3917-4f5c-b443-e2b28d633424" containerName="ceilometer-central-agent" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.616520 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="d306a755-b503-4799-b23b-05b1afe561eb" containerName="dnsmasq-dns" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.616537 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af61ae2-3917-4f5c-b443-e2b28d633424" containerName="ceilometer-central-agent" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.616554 
4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af61ae2-3917-4f5c-b443-e2b28d633424" containerName="sg-core" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.616564 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af61ae2-3917-4f5c-b443-e2b28d633424" containerName="proxy-httpd" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.616578 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af61ae2-3917-4f5c-b443-e2b28d633424" containerName="ceilometer-notification-agent" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.618303 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.619661 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.620371 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.623339 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.623858 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.639918 4747 scope.go:117] "RemoveContainer" containerID="e779a5c8b92578d18727b03984d92b2d45e4422f99506e480307fd18f836945e" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.782996 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.783185 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.783376 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-config-data\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.783475 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-scripts\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.783623 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.783868 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-log-httpd\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.783901 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjnrj\" (UniqueName: 
\"kubernetes.io/projected/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-kube-api-access-xjnrj\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.784044 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-run-httpd\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.886426 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjnrj\" (UniqueName: \"kubernetes.io/projected/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-kube-api-access-xjnrj\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.886493 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-run-httpd\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.886533 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.886590 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" 
Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.886622 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-config-data\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.886651 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-scripts\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.886697 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.886752 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-log-httpd\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.887267 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-log-httpd\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.887685 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-run-httpd\") pod \"ceilometer-0\" (UID: 
\"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.891607 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.892253 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.892356 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-scripts\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.892888 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.898119 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-config-data\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.903193 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjnrj\" (UniqueName: 
\"kubernetes.io/projected/34eac981-4ed2-4654-b4b0-f52ac5c7aeda-kube-api-access-xjnrj\") pod \"ceilometer-0\" (UID: \"34eac981-4ed2-4654-b4b0-f52ac5c7aeda\") " pod="openstack/ceilometer-0" Dec 15 05:55:31 crc kubenswrapper[4747]: I1215 05:55:31.939250 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 15 05:55:32 crc kubenswrapper[4747]: I1215 05:55:32.366835 4747 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 15 05:55:32 crc kubenswrapper[4747]: I1215 05:55:32.368047 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 15 05:55:32 crc kubenswrapper[4747]: I1215 05:55:32.560269 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34eac981-4ed2-4654-b4b0-f52ac5c7aeda","Type":"ContainerStarted","Data":"9f52997cae6564922b2bf6fa44f017f311635e961df0d70f7d0ffd2a64b471e6"} Dec 15 05:55:32 crc kubenswrapper[4747]: I1215 05:55:32.641693 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3af61ae2-3917-4f5c-b443-e2b28d633424" path="/var/lib/kubelet/pods/3af61ae2-3917-4f5c-b443-e2b28d633424/volumes" Dec 15 05:55:32 crc kubenswrapper[4747]: I1215 05:55:32.793950 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 15 05:55:32 crc kubenswrapper[4747]: I1215 05:55:32.794387 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 15 05:55:33 crc kubenswrapper[4747]: I1215 05:55:33.569445 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34eac981-4ed2-4654-b4b0-f52ac5c7aeda","Type":"ContainerStarted","Data":"06dec22061a2959a3429e4139577d7d1bc161e6f1b2eb7ff01bc59180259c847"} Dec 15 05:55:33 crc kubenswrapper[4747]: I1215 05:55:33.808062 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="7d859c86-54f1-459b-82a5-1ed6739f42f9" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.197:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 15 05:55:33 crc kubenswrapper[4747]: I1215 05:55:33.808086 4747 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7d859c86-54f1-459b-82a5-1ed6739f42f9" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 15 05:55:34 crc kubenswrapper[4747]: I1215 05:55:34.581778 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34eac981-4ed2-4654-b4b0-f52ac5c7aeda","Type":"ContainerStarted","Data":"d7dc83f83e7fdcf5795ba954f7d8f45101121d988793cc0955c15404b05578fd"} Dec 15 05:55:35 crc kubenswrapper[4747]: I1215 05:55:35.598268 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34eac981-4ed2-4654-b4b0-f52ac5c7aeda","Type":"ContainerStarted","Data":"6bf2b2fd568bae65128887af3eb34fc10971e19329260cdc6bece59cc40ac59a"} Dec 15 05:55:36 crc kubenswrapper[4747]: I1215 05:55:36.610318 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34eac981-4ed2-4654-b4b0-f52ac5c7aeda","Type":"ContainerStarted","Data":"b82ab33a9a116c05c23103bbd5c0cae55e7a3ebe8b49746ddb75a0eba22986b7"} Dec 15 05:55:36 crc kubenswrapper[4747]: I1215 05:55:36.610645 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 15 05:55:42 crc kubenswrapper[4747]: I1215 05:55:42.802452 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 15 05:55:42 crc kubenswrapper[4747]: I1215 05:55:42.803631 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 15 05:55:42 crc 
kubenswrapper[4747]: I1215 05:55:42.803859 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 15 05:55:42 crc kubenswrapper[4747]: I1215 05:55:42.809280 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 15 05:55:42 crc kubenswrapper[4747]: I1215 05:55:42.826918 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=7.835485903 podStartE2EDuration="11.826905177s" podCreationTimestamp="2025-12-15 05:55:31 +0000 UTC" firstStartedPulling="2025-12-15 05:55:32.36652461 +0000 UTC m=+1096.063036528" lastFinishedPulling="2025-12-15 05:55:36.357943896 +0000 UTC m=+1100.054455802" observedRunningTime="2025-12-15 05:55:36.628112108 +0000 UTC m=+1100.324624015" watchObservedRunningTime="2025-12-15 05:55:42.826905177 +0000 UTC m=+1106.523417094" Dec 15 05:55:43 crc kubenswrapper[4747]: I1215 05:55:43.679580 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 15 05:55:43 crc kubenswrapper[4747]: I1215 05:55:43.687428 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 15 05:56:01 crc kubenswrapper[4747]: I1215 05:56:01.951143 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 15 05:56:10 crc kubenswrapper[4747]: I1215 05:56:10.386248 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 15 05:56:11 crc kubenswrapper[4747]: I1215 05:56:11.202766 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 15 05:56:14 crc kubenswrapper[4747]: I1215 05:56:14.519239 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="9bece5e6-b345-4969-a563-81fb3706f8f1" containerName="rabbitmq" 
containerID="cri-o://56e02d43c553bcb63c84e7420195e07acf1694be8e3a4f5410fb5bd736df3cbb" gracePeriod=604796 Dec 15 05:56:15 crc kubenswrapper[4747]: I1215 05:56:15.564516 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="65a53faf-94ad-48f3-b8e0-8642376f89ee" containerName="rabbitmq" containerID="cri-o://ee7a70f0b60728a6b77ed0468330e5d9e58738f0a9b357ea4016c118b3ef4d37" gracePeriod=604796 Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.035316 4747 generic.go:334] "Generic (PLEG): container finished" podID="9bece5e6-b345-4969-a563-81fb3706f8f1" containerID="56e02d43c553bcb63c84e7420195e07acf1694be8e3a4f5410fb5bd736df3cbb" exitCode=0 Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.035895 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9bece5e6-b345-4969-a563-81fb3706f8f1","Type":"ContainerDied","Data":"56e02d43c553bcb63c84e7420195e07acf1694be8e3a4f5410fb5bd736df3cbb"} Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.035973 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9bece5e6-b345-4969-a563-81fb3706f8f1","Type":"ContainerDied","Data":"e32387d719fe96325422479626ab5af7bf174ec9e1dc2028393d6024d87b28ca"} Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.035991 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e32387d719fe96325422479626ab5af7bf174ec9e1dc2028393d6024d87b28ca" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.076875 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.173457 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9bece5e6-b345-4969-a563-81fb3706f8f1-pod-info\") pod \"9bece5e6-b345-4969-a563-81fb3706f8f1\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.173499 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"9bece5e6-b345-4969-a563-81fb3706f8f1\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.173553 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9bece5e6-b345-4969-a563-81fb3706f8f1-erlang-cookie-secret\") pod \"9bece5e6-b345-4969-a563-81fb3706f8f1\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.173587 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9bece5e6-b345-4969-a563-81fb3706f8f1-plugins-conf\") pod \"9bece5e6-b345-4969-a563-81fb3706f8f1\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.173626 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-confd\") pod \"9bece5e6-b345-4969-a563-81fb3706f8f1\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.173657 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-erlang-cookie\") pod \"9bece5e6-b345-4969-a563-81fb3706f8f1\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.173724 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-plugins\") pod \"9bece5e6-b345-4969-a563-81fb3706f8f1\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.173748 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-tls\") pod \"9bece5e6-b345-4969-a563-81fb3706f8f1\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.173766 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc6cq\" (UniqueName: \"kubernetes.io/projected/9bece5e6-b345-4969-a563-81fb3706f8f1-kube-api-access-sc6cq\") pod \"9bece5e6-b345-4969-a563-81fb3706f8f1\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.173805 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bece5e6-b345-4969-a563-81fb3706f8f1-config-data\") pod \"9bece5e6-b345-4969-a563-81fb3706f8f1\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.173845 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9bece5e6-b345-4969-a563-81fb3706f8f1-server-conf\") pod \"9bece5e6-b345-4969-a563-81fb3706f8f1\" (UID: \"9bece5e6-b345-4969-a563-81fb3706f8f1\") " Dec 15 05:56:21 crc kubenswrapper[4747]: 
I1215 05:56:21.174883 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9bece5e6-b345-4969-a563-81fb3706f8f1" (UID: "9bece5e6-b345-4969-a563-81fb3706f8f1"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.175559 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bece5e6-b345-4969-a563-81fb3706f8f1-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9bece5e6-b345-4969-a563-81fb3706f8f1" (UID: "9bece5e6-b345-4969-a563-81fb3706f8f1"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.175855 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9bece5e6-b345-4969-a563-81fb3706f8f1" (UID: "9bece5e6-b345-4969-a563-81fb3706f8f1"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.179870 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bece5e6-b345-4969-a563-81fb3706f8f1-kube-api-access-sc6cq" (OuterVolumeSpecName: "kube-api-access-sc6cq") pod "9bece5e6-b345-4969-a563-81fb3706f8f1" (UID: "9bece5e6-b345-4969-a563-81fb3706f8f1"). InnerVolumeSpecName "kube-api-access-sc6cq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.181807 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9bece5e6-b345-4969-a563-81fb3706f8f1-pod-info" (OuterVolumeSpecName: "pod-info") pod "9bece5e6-b345-4969-a563-81fb3706f8f1" (UID: "9bece5e6-b345-4969-a563-81fb3706f8f1"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.181952 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bece5e6-b345-4969-a563-81fb3706f8f1-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9bece5e6-b345-4969-a563-81fb3706f8f1" (UID: "9bece5e6-b345-4969-a563-81fb3706f8f1"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.182282 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9bece5e6-b345-4969-a563-81fb3706f8f1" (UID: "9bece5e6-b345-4969-a563-81fb3706f8f1"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.182401 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "9bece5e6-b345-4969-a563-81fb3706f8f1" (UID: "9bece5e6-b345-4969-a563-81fb3706f8f1"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.209219 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bece5e6-b345-4969-a563-81fb3706f8f1-config-data" (OuterVolumeSpecName: "config-data") pod "9bece5e6-b345-4969-a563-81fb3706f8f1" (UID: "9bece5e6-b345-4969-a563-81fb3706f8f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.228542 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bece5e6-b345-4969-a563-81fb3706f8f1-server-conf" (OuterVolumeSpecName: "server-conf") pod "9bece5e6-b345-4969-a563-81fb3706f8f1" (UID: "9bece5e6-b345-4969-a563-81fb3706f8f1"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.275977 4747 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9bece5e6-b345-4969-a563-81fb3706f8f1-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.276011 4747 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.276025 4747 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.276036 4747 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-tls\") on node \"crc\" 
DevicePath \"\"" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.276047 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc6cq\" (UniqueName: \"kubernetes.io/projected/9bece5e6-b345-4969-a563-81fb3706f8f1-kube-api-access-sc6cq\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.276055 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bece5e6-b345-4969-a563-81fb3706f8f1-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.276063 4747 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9bece5e6-b345-4969-a563-81fb3706f8f1-server-conf\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.276070 4747 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9bece5e6-b345-4969-a563-81fb3706f8f1-pod-info\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.276099 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.276110 4747 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9bece5e6-b345-4969-a563-81fb3706f8f1-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.306485 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9bece5e6-b345-4969-a563-81fb3706f8f1" (UID: "9bece5e6-b345-4969-a563-81fb3706f8f1"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.312943 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.343897 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dd77c7755-47bd5"] Dec 15 05:56:21 crc kubenswrapper[4747]: E1215 05:56:21.344461 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bece5e6-b345-4969-a563-81fb3706f8f1" containerName="rabbitmq" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.344484 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bece5e6-b345-4969-a563-81fb3706f8f1" containerName="rabbitmq" Dec 15 05:56:21 crc kubenswrapper[4747]: E1215 05:56:21.344499 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bece5e6-b345-4969-a563-81fb3706f8f1" containerName="setup-container" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.344506 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bece5e6-b345-4969-a563-81fb3706f8f1" containerName="setup-container" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.344770 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bece5e6-b345-4969-a563-81fb3706f8f1" containerName="rabbitmq" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.346685 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.357182 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.370493 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dd77c7755-47bd5"] Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.376557 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-dns-swift-storage-0\") pod \"dnsmasq-dns-dd77c7755-47bd5\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.376595 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-ovsdbserver-sb\") pod \"dnsmasq-dns-dd77c7755-47bd5\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.376629 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-dns-svc\") pod \"dnsmasq-dns-dd77c7755-47bd5\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.376707 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-ovsdbserver-nb\") pod \"dnsmasq-dns-dd77c7755-47bd5\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " 
pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.376758 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-config\") pod \"dnsmasq-dns-dd77c7755-47bd5\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.376789 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-openstack-edpm-ipam\") pod \"dnsmasq-dns-dd77c7755-47bd5\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.376814 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72p9n\" (UniqueName: \"kubernetes.io/projected/f54d034b-85e7-412e-9cdf-14bee08e155d-kube-api-access-72p9n\") pod \"dnsmasq-dns-dd77c7755-47bd5\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.376870 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.376882 4747 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9bece5e6-b345-4969-a563-81fb3706f8f1-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.479837 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-ovsdbserver-nb\") pod \"dnsmasq-dns-dd77c7755-47bd5\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.479988 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-config\") pod \"dnsmasq-dns-dd77c7755-47bd5\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.480055 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-openstack-edpm-ipam\") pod \"dnsmasq-dns-dd77c7755-47bd5\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.480112 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72p9n\" (UniqueName: \"kubernetes.io/projected/f54d034b-85e7-412e-9cdf-14bee08e155d-kube-api-access-72p9n\") pod \"dnsmasq-dns-dd77c7755-47bd5\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.480792 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-dns-swift-storage-0\") pod \"dnsmasq-dns-dd77c7755-47bd5\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.481653 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-ovsdbserver-sb\") pod \"dnsmasq-dns-dd77c7755-47bd5\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.481040 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-ovsdbserver-nb\") pod \"dnsmasq-dns-dd77c7755-47bd5\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.481281 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-openstack-edpm-ipam\") pod \"dnsmasq-dns-dd77c7755-47bd5\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.481585 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-dns-swift-storage-0\") pod \"dnsmasq-dns-dd77c7755-47bd5\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.481153 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-config\") pod \"dnsmasq-dns-dd77c7755-47bd5\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.482313 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-ovsdbserver-sb\") pod 
\"dnsmasq-dns-dd77c7755-47bd5\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.482536 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-dns-svc\") pod \"dnsmasq-dns-dd77c7755-47bd5\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.483573 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-dns-svc\") pod \"dnsmasq-dns-dd77c7755-47bd5\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.491760 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="65a53faf-94ad-48f3-b8e0-8642376f89ee" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.94:5671: connect: connection refused" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.498356 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72p9n\" (UniqueName: \"kubernetes.io/projected/f54d034b-85e7-412e-9cdf-14bee08e155d-kube-api-access-72p9n\") pod \"dnsmasq-dns-dd77c7755-47bd5\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:21 crc kubenswrapper[4747]: I1215 05:56:21.674592 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.045133 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.045607 4747 generic.go:334] "Generic (PLEG): container finished" podID="65a53faf-94ad-48f3-b8e0-8642376f89ee" containerID="ee7a70f0b60728a6b77ed0468330e5d9e58738f0a9b357ea4016c118b3ef4d37" exitCode=0 Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.045722 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.045768 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"65a53faf-94ad-48f3-b8e0-8642376f89ee","Type":"ContainerDied","Data":"ee7a70f0b60728a6b77ed0468330e5d9e58738f0a9b357ea4016c118b3ef4d37"} Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.045803 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"65a53faf-94ad-48f3-b8e0-8642376f89ee","Type":"ContainerDied","Data":"c67cfbd36f130e3d07c9e6271ca1bf0ab69034153e7c19f48e1ae91464e73b2e"} Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.045822 4747 scope.go:117] "RemoveContainer" containerID="ee7a70f0b60728a6b77ed0468330e5d9e58738f0a9b357ea4016c118b3ef4d37" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.073356 4747 scope.go:117] "RemoveContainer" containerID="8e013d9a657de63787a61a2c6aea79b4254a3e35c2c761dd928d98d5ed13bf52" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.086767 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.095063 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-tls\") pod \"65a53faf-94ad-48f3-b8e0-8642376f89ee\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " Dec 15 05:56:22 crc 
kubenswrapper[4747]: I1215 05:56:22.095108 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qb9m\" (UniqueName: \"kubernetes.io/projected/65a53faf-94ad-48f3-b8e0-8642376f89ee-kube-api-access-9qb9m\") pod \"65a53faf-94ad-48f3-b8e0-8642376f89ee\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.095134 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/65a53faf-94ad-48f3-b8e0-8642376f89ee-plugins-conf\") pod \"65a53faf-94ad-48f3-b8e0-8642376f89ee\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.095167 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"65a53faf-94ad-48f3-b8e0-8642376f89ee\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.095226 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a53faf-94ad-48f3-b8e0-8642376f89ee-config-data\") pod \"65a53faf-94ad-48f3-b8e0-8642376f89ee\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.095260 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-plugins\") pod \"65a53faf-94ad-48f3-b8e0-8642376f89ee\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.095310 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-erlang-cookie\") pod \"65a53faf-94ad-48f3-b8e0-8642376f89ee\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.095358 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/65a53faf-94ad-48f3-b8e0-8642376f89ee-erlang-cookie-secret\") pod \"65a53faf-94ad-48f3-b8e0-8642376f89ee\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.095393 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/65a53faf-94ad-48f3-b8e0-8642376f89ee-pod-info\") pod \"65a53faf-94ad-48f3-b8e0-8642376f89ee\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.095452 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-confd\") pod \"65a53faf-94ad-48f3-b8e0-8642376f89ee\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.095474 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/65a53faf-94ad-48f3-b8e0-8642376f89ee-server-conf\") pod \"65a53faf-94ad-48f3-b8e0-8642376f89ee\" (UID: \"65a53faf-94ad-48f3-b8e0-8642376f89ee\") " Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.098421 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.098733 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65a53faf-94ad-48f3-b8e0-8642376f89ee-plugins-conf" (OuterVolumeSpecName: 
"plugins-conf") pod "65a53faf-94ad-48f3-b8e0-8642376f89ee" (UID: "65a53faf-94ad-48f3-b8e0-8642376f89ee"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.100551 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "65a53faf-94ad-48f3-b8e0-8642376f89ee" (UID: "65a53faf-94ad-48f3-b8e0-8642376f89ee"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.101775 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "65a53faf-94ad-48f3-b8e0-8642376f89ee" (UID: "65a53faf-94ad-48f3-b8e0-8642376f89ee"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.104061 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/65a53faf-94ad-48f3-b8e0-8642376f89ee-pod-info" (OuterVolumeSpecName: "pod-info") pod "65a53faf-94ad-48f3-b8e0-8642376f89ee" (UID: "65a53faf-94ad-48f3-b8e0-8642376f89ee"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.108131 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.108769 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a53faf-94ad-48f3-b8e0-8642376f89ee-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "65a53faf-94ad-48f3-b8e0-8642376f89ee" (UID: "65a53faf-94ad-48f3-b8e0-8642376f89ee"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.108906 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "65a53faf-94ad-48f3-b8e0-8642376f89ee" (UID: "65a53faf-94ad-48f3-b8e0-8642376f89ee"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 15 05:56:22 crc kubenswrapper[4747]: E1215 05:56:22.109654 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a53faf-94ad-48f3-b8e0-8642376f89ee" containerName="setup-container" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.109717 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a53faf-94ad-48f3-b8e0-8642376f89ee" containerName="setup-container" Dec 15 05:56:22 crc kubenswrapper[4747]: E1215 05:56:22.109782 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a53faf-94ad-48f3-b8e0-8642376f89ee" containerName="rabbitmq" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.109830 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a53faf-94ad-48f3-b8e0-8642376f89ee" containerName="rabbitmq" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.110180 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="65a53faf-94ad-48f3-b8e0-8642376f89ee" containerName="rabbitmq" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.111118 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.120623 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.121382 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.121447 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.121706 4747 scope.go:117] "RemoveContainer" containerID="ee7a70f0b60728a6b77ed0468330e5d9e58738f0a9b357ea4016c118b3ef4d37" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.121822 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.122108 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.122242 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.122371 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-68tlq" Dec 15 05:56:22 crc kubenswrapper[4747]: E1215 05:56:22.123207 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee7a70f0b60728a6b77ed0468330e5d9e58738f0a9b357ea4016c118b3ef4d37\": container with ID starting with ee7a70f0b60728a6b77ed0468330e5d9e58738f0a9b357ea4016c118b3ef4d37 not found: ID does not exist" containerID="ee7a70f0b60728a6b77ed0468330e5d9e58738f0a9b357ea4016c118b3ef4d37" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.123256 4747 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"ee7a70f0b60728a6b77ed0468330e5d9e58738f0a9b357ea4016c118b3ef4d37"} err="failed to get container status \"ee7a70f0b60728a6b77ed0468330e5d9e58738f0a9b357ea4016c118b3ef4d37\": rpc error: code = NotFound desc = could not find container \"ee7a70f0b60728a6b77ed0468330e5d9e58738f0a9b357ea4016c118b3ef4d37\": container with ID starting with ee7a70f0b60728a6b77ed0468330e5d9e58738f0a9b357ea4016c118b3ef4d37 not found: ID does not exist" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.123288 4747 scope.go:117] "RemoveContainer" containerID="8e013d9a657de63787a61a2c6aea79b4254a3e35c2c761dd928d98d5ed13bf52" Dec 15 05:56:22 crc kubenswrapper[4747]: E1215 05:56:22.130233 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e013d9a657de63787a61a2c6aea79b4254a3e35c2c761dd928d98d5ed13bf52\": container with ID starting with 8e013d9a657de63787a61a2c6aea79b4254a3e35c2c761dd928d98d5ed13bf52 not found: ID does not exist" containerID="8e013d9a657de63787a61a2c6aea79b4254a3e35c2c761dd928d98d5ed13bf52" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.130403 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e013d9a657de63787a61a2c6aea79b4254a3e35c2c761dd928d98d5ed13bf52"} err="failed to get container status \"8e013d9a657de63787a61a2c6aea79b4254a3e35c2c761dd928d98d5ed13bf52\": rpc error: code = NotFound desc = could not find container \"8e013d9a657de63787a61a2c6aea79b4254a3e35c2c761dd928d98d5ed13bf52\": container with ID starting with 8e013d9a657de63787a61a2c6aea79b4254a3e35c2c761dd928d98d5ed13bf52 not found: ID does not exist" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.133266 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.154547 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "65a53faf-94ad-48f3-b8e0-8642376f89ee" (UID: "65a53faf-94ad-48f3-b8e0-8642376f89ee"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.154662 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65a53faf-94ad-48f3-b8e0-8642376f89ee-kube-api-access-9qb9m" (OuterVolumeSpecName: "kube-api-access-9qb9m") pod "65a53faf-94ad-48f3-b8e0-8642376f89ee" (UID: "65a53faf-94ad-48f3-b8e0-8642376f89ee"). InnerVolumeSpecName "kube-api-access-9qb9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.167994 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65a53faf-94ad-48f3-b8e0-8642376f89ee-config-data" (OuterVolumeSpecName: "config-data") pod "65a53faf-94ad-48f3-b8e0-8642376f89ee" (UID: "65a53faf-94ad-48f3-b8e0-8642376f89ee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.198747 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a53faf-94ad-48f3-b8e0-8642376f89ee-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.198780 4747 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.198796 4747 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.198806 4747 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/65a53faf-94ad-48f3-b8e0-8642376f89ee-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.198818 4747 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/65a53faf-94ad-48f3-b8e0-8642376f89ee-pod-info\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.198825 4747 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.198835 4747 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/65a53faf-94ad-48f3-b8e0-8642376f89ee-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 
05:56:22.198843 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qb9m\" (UniqueName: \"kubernetes.io/projected/65a53faf-94ad-48f3-b8e0-8642376f89ee-kube-api-access-9qb9m\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.198873 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.205555 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65a53faf-94ad-48f3-b8e0-8642376f89ee-server-conf" (OuterVolumeSpecName: "server-conf") pod "65a53faf-94ad-48f3-b8e0-8642376f89ee" (UID: "65a53faf-94ad-48f3-b8e0-8642376f89ee"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.209066 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dd77c7755-47bd5"] Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.218010 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.218834 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "65a53faf-94ad-48f3-b8e0-8642376f89ee" (UID: "65a53faf-94ad-48f3-b8e0-8642376f89ee"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.308226 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7a527363-fdfb-4bbe-a50e-41923c5cc78c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.308284 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.308365 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a527363-fdfb-4bbe-a50e-41923c5cc78c-config-data\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.308385 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7a527363-fdfb-4bbe-a50e-41923c5cc78c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.308422 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7a527363-fdfb-4bbe-a50e-41923c5cc78c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc 
kubenswrapper[4747]: I1215 05:56:22.308461 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7a527363-fdfb-4bbe-a50e-41923c5cc78c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.308668 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7a527363-fdfb-4bbe-a50e-41923c5cc78c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.308693 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzd7w\" (UniqueName: \"kubernetes.io/projected/7a527363-fdfb-4bbe-a50e-41923c5cc78c-kube-api-access-xzd7w\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.308723 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7a527363-fdfb-4bbe-a50e-41923c5cc78c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.308786 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7a527363-fdfb-4bbe-a50e-41923c5cc78c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.308821 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7a527363-fdfb-4bbe-a50e-41923c5cc78c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.308878 4747 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/65a53faf-94ad-48f3-b8e0-8642376f89ee-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.308890 4747 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/65a53faf-94ad-48f3-b8e0-8642376f89ee-server-conf\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.308900 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.410282 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7a527363-fdfb-4bbe-a50e-41923c5cc78c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.410340 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7a527363-fdfb-4bbe-a50e-41923c5cc78c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.410367 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/7a527363-fdfb-4bbe-a50e-41923c5cc78c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.410384 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzd7w\" (UniqueName: \"kubernetes.io/projected/7a527363-fdfb-4bbe-a50e-41923c5cc78c-kube-api-access-xzd7w\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.410407 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7a527363-fdfb-4bbe-a50e-41923c5cc78c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.410446 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7a527363-fdfb-4bbe-a50e-41923c5cc78c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.410467 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7a527363-fdfb-4bbe-a50e-41923c5cc78c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.410520 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7a527363-fdfb-4bbe-a50e-41923c5cc78c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.410543 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.410582 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a527363-fdfb-4bbe-a50e-41923c5cc78c-config-data\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.410601 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7a527363-fdfb-4bbe-a50e-41923c5cc78c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.411839 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7a527363-fdfb-4bbe-a50e-41923c5cc78c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.412707 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7a527363-fdfb-4bbe-a50e-41923c5cc78c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.413007 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7a527363-fdfb-4bbe-a50e-41923c5cc78c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.414489 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7a527363-fdfb-4bbe-a50e-41923c5cc78c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.414729 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.415423 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a527363-fdfb-4bbe-a50e-41923c5cc78c-config-data\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.416562 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7a527363-fdfb-4bbe-a50e-41923c5cc78c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.419541 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7a527363-fdfb-4bbe-a50e-41923c5cc78c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.420273 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7a527363-fdfb-4bbe-a50e-41923c5cc78c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.422481 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7a527363-fdfb-4bbe-a50e-41923c5cc78c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.437580 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzd7w\" (UniqueName: \"kubernetes.io/projected/7a527363-fdfb-4bbe-a50e-41923c5cc78c-kube-api-access-xzd7w\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.460070 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7a527363-fdfb-4bbe-a50e-41923c5cc78c\") " pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.479373 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.640206 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bece5e6-b345-4969-a563-81fb3706f8f1" path="/var/lib/kubelet/pods/9bece5e6-b345-4969-a563-81fb3706f8f1/volumes" Dec 15 05:56:22 crc kubenswrapper[4747]: I1215 05:56:22.909505 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.057422 4747 generic.go:334] "Generic (PLEG): container finished" podID="f54d034b-85e7-412e-9cdf-14bee08e155d" containerID="647a5ad1640386b0f0df1cabd4bd951728c170c37bbcfe06e531dfbc0cce62cd" exitCode=0 Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.057583 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd77c7755-47bd5" event={"ID":"f54d034b-85e7-412e-9cdf-14bee08e155d","Type":"ContainerDied","Data":"647a5ad1640386b0f0df1cabd4bd951728c170c37bbcfe06e531dfbc0cce62cd"} Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.057621 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd77c7755-47bd5" event={"ID":"f54d034b-85e7-412e-9cdf-14bee08e155d","Type":"ContainerStarted","Data":"7cb886486678ee7f0b68515ec4c5719adab7b651d6c334af34b70ef5bdf0a673"} Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.059501 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7a527363-fdfb-4bbe-a50e-41923c5cc78c","Type":"ContainerStarted","Data":"d8803c25685ec688dffdadc335f96d2cc904d8e2811a68cdeb43c09568386901"} Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.062552 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.164051 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.181492 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.194992 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.198590 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.201228 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.201261 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.201342 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.201660 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.201713 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-25dgb" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.201747 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.206569 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.209036 
4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.331057 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/42363452-e04c-462e-8341-6f3f99392357-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.331215 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/42363452-e04c-462e-8341-6f3f99392357-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.331462 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/42363452-e04c-462e-8341-6f3f99392357-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.331530 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/42363452-e04c-462e-8341-6f3f99392357-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.331600 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.331713 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/42363452-e04c-462e-8341-6f3f99392357-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.331819 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/42363452-e04c-462e-8341-6f3f99392357-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.331959 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/42363452-e04c-462e-8341-6f3f99392357-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.332059 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lff8l\" (UniqueName: \"kubernetes.io/projected/42363452-e04c-462e-8341-6f3f99392357-kube-api-access-lff8l\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.332151 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/42363452-e04c-462e-8341-6f3f99392357-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.332195 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/42363452-e04c-462e-8341-6f3f99392357-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.434239 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/42363452-e04c-462e-8341-6f3f99392357-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.434524 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/42363452-e04c-462e-8341-6f3f99392357-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.434640 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lff8l\" (UniqueName: \"kubernetes.io/projected/42363452-e04c-462e-8341-6f3f99392357-kube-api-access-lff8l\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.434742 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/42363452-e04c-462e-8341-6f3f99392357-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.434837 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/42363452-e04c-462e-8341-6f3f99392357-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.434948 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/42363452-e04c-462e-8341-6f3f99392357-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.435074 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/42363452-e04c-462e-8341-6f3f99392357-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.435176 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/42363452-e04c-462e-8341-6f3f99392357-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.435340 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/42363452-e04c-462e-8341-6f3f99392357-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 
05:56:23.435423 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/42363452-e04c-462e-8341-6f3f99392357-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.435522 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.435641 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/42363452-e04c-462e-8341-6f3f99392357-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.435653 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/42363452-e04c-462e-8341-6f3f99392357-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.436148 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.438801 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/42363452-e04c-462e-8341-6f3f99392357-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.439313 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/42363452-e04c-462e-8341-6f3f99392357-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.439490 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/42363452-e04c-462e-8341-6f3f99392357-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.439903 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/42363452-e04c-462e-8341-6f3f99392357-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.440632 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/42363452-e04c-462e-8341-6f3f99392357-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.442297 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/42363452-e04c-462e-8341-6f3f99392357-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.443995 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/42363452-e04c-462e-8341-6f3f99392357-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.449907 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lff8l\" (UniqueName: \"kubernetes.io/projected/42363452-e04c-462e-8341-6f3f99392357-kube-api-access-lff8l\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.463617 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"42363452-e04c-462e-8341-6f3f99392357\") " pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:23 crc kubenswrapper[4747]: I1215 05:56:23.532223 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:56:24 crc kubenswrapper[4747]: I1215 05:56:24.058721 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 15 05:56:24 crc kubenswrapper[4747]: I1215 05:56:24.075343 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd77c7755-47bd5" event={"ID":"f54d034b-85e7-412e-9cdf-14bee08e155d","Type":"ContainerStarted","Data":"246afba3e32d5343ba07469ed31606bddf0ba61125d5ab712c1c059e9df78305"} Dec 15 05:56:24 crc kubenswrapper[4747]: I1215 05:56:24.076191 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:24 crc kubenswrapper[4747]: I1215 05:56:24.077440 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"42363452-e04c-462e-8341-6f3f99392357","Type":"ContainerStarted","Data":"1d00460ce44867fcfa7b8e4c344346706d44058f8caa00963cae222eabd64270"} Dec 15 05:56:24 crc kubenswrapper[4747]: I1215 05:56:24.095130 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dd77c7755-47bd5" podStartSLOduration=3.095104683 podStartE2EDuration="3.095104683s" podCreationTimestamp="2025-12-15 05:56:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:56:24.09210874 +0000 UTC m=+1147.788620657" watchObservedRunningTime="2025-12-15 05:56:24.095104683 +0000 UTC m=+1147.791616590" Dec 15 05:56:24 crc kubenswrapper[4747]: I1215 05:56:24.639677 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65a53faf-94ad-48f3-b8e0-8642376f89ee" path="/var/lib/kubelet/pods/65a53faf-94ad-48f3-b8e0-8642376f89ee/volumes" Dec 15 05:56:25 crc kubenswrapper[4747]: I1215 05:56:25.087674 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"7a527363-fdfb-4bbe-a50e-41923c5cc78c","Type":"ContainerStarted","Data":"768177cd2805e18620b632f28bca28a5d4a857887e4b3679cc777325b7d87302"} Dec 15 05:56:26 crc kubenswrapper[4747]: I1215 05:56:26.117771 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"42363452-e04c-462e-8341-6f3f99392357","Type":"ContainerStarted","Data":"1c8ea96f9339a971f96546315c1cbb628285e68520446c06fc47281b35196e54"} Dec 15 05:56:31 crc kubenswrapper[4747]: I1215 05:56:31.677147 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:31 crc kubenswrapper[4747]: I1215 05:56:31.726684 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864557ccdf-8grfz"] Dec 15 05:56:31 crc kubenswrapper[4747]: I1215 05:56:31.726961 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-864557ccdf-8grfz" podUID="fee4e87f-8126-4c7e-bbb1-898b81c4f9b0" containerName="dnsmasq-dns" containerID="cri-o://9d7fe17a320d3762e3e55f99d0793a54627fe100bc799b5c6b27e20f6a69466b" gracePeriod=10 Dec 15 05:56:31 crc kubenswrapper[4747]: I1215 05:56:31.822214 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-775bb8f95f-twm2m"] Dec 15 05:56:31 crc kubenswrapper[4747]: I1215 05:56:31.825697 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:31 crc kubenswrapper[4747]: I1215 05:56:31.846695 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-775bb8f95f-twm2m"] Dec 15 05:56:31 crc kubenswrapper[4747]: I1215 05:56:31.925053 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad0490f9-1430-4511-b8cc-139a6c656b48-dns-swift-storage-0\") pod \"dnsmasq-dns-775bb8f95f-twm2m\" (UID: \"ad0490f9-1430-4511-b8cc-139a6c656b48\") " pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:31 crc kubenswrapper[4747]: I1215 05:56:31.925131 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad0490f9-1430-4511-b8cc-139a6c656b48-ovsdbserver-nb\") pod \"dnsmasq-dns-775bb8f95f-twm2m\" (UID: \"ad0490f9-1430-4511-b8cc-139a6c656b48\") " pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:31 crc kubenswrapper[4747]: I1215 05:56:31.925318 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ad0490f9-1430-4511-b8cc-139a6c656b48-openstack-edpm-ipam\") pod \"dnsmasq-dns-775bb8f95f-twm2m\" (UID: \"ad0490f9-1430-4511-b8cc-139a6c656b48\") " pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:31 crc kubenswrapper[4747]: I1215 05:56:31.925668 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad0490f9-1430-4511-b8cc-139a6c656b48-ovsdbserver-sb\") pod \"dnsmasq-dns-775bb8f95f-twm2m\" (UID: \"ad0490f9-1430-4511-b8cc-139a6c656b48\") " pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:31 crc kubenswrapper[4747]: I1215 05:56:31.925888 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad0490f9-1430-4511-b8cc-139a6c656b48-config\") pod \"dnsmasq-dns-775bb8f95f-twm2m\" (UID: \"ad0490f9-1430-4511-b8cc-139a6c656b48\") " pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:31 crc kubenswrapper[4747]: I1215 05:56:31.925919 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cztkv\" (UniqueName: \"kubernetes.io/projected/ad0490f9-1430-4511-b8cc-139a6c656b48-kube-api-access-cztkv\") pod \"dnsmasq-dns-775bb8f95f-twm2m\" (UID: \"ad0490f9-1430-4511-b8cc-139a6c656b48\") " pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:31 crc kubenswrapper[4747]: I1215 05:56:31.926052 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad0490f9-1430-4511-b8cc-139a6c656b48-dns-svc\") pod \"dnsmasq-dns-775bb8f95f-twm2m\" (UID: \"ad0490f9-1430-4511-b8cc-139a6c656b48\") " pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.028461 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ad0490f9-1430-4511-b8cc-139a6c656b48-openstack-edpm-ipam\") pod \"dnsmasq-dns-775bb8f95f-twm2m\" (UID: \"ad0490f9-1430-4511-b8cc-139a6c656b48\") " pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.028570 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad0490f9-1430-4511-b8cc-139a6c656b48-ovsdbserver-sb\") pod \"dnsmasq-dns-775bb8f95f-twm2m\" (UID: \"ad0490f9-1430-4511-b8cc-139a6c656b48\") " pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.028619 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad0490f9-1430-4511-b8cc-139a6c656b48-config\") pod \"dnsmasq-dns-775bb8f95f-twm2m\" (UID: \"ad0490f9-1430-4511-b8cc-139a6c656b48\") " pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.028641 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cztkv\" (UniqueName: \"kubernetes.io/projected/ad0490f9-1430-4511-b8cc-139a6c656b48-kube-api-access-cztkv\") pod \"dnsmasq-dns-775bb8f95f-twm2m\" (UID: \"ad0490f9-1430-4511-b8cc-139a6c656b48\") " pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.028675 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad0490f9-1430-4511-b8cc-139a6c656b48-dns-svc\") pod \"dnsmasq-dns-775bb8f95f-twm2m\" (UID: \"ad0490f9-1430-4511-b8cc-139a6c656b48\") " pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.028735 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad0490f9-1430-4511-b8cc-139a6c656b48-dns-swift-storage-0\") pod \"dnsmasq-dns-775bb8f95f-twm2m\" (UID: \"ad0490f9-1430-4511-b8cc-139a6c656b48\") " pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.028765 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad0490f9-1430-4511-b8cc-139a6c656b48-ovsdbserver-nb\") pod \"dnsmasq-dns-775bb8f95f-twm2m\" (UID: \"ad0490f9-1430-4511-b8cc-139a6c656b48\") " pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.029398 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/ad0490f9-1430-4511-b8cc-139a6c656b48-ovsdbserver-sb\") pod \"dnsmasq-dns-775bb8f95f-twm2m\" (UID: \"ad0490f9-1430-4511-b8cc-139a6c656b48\") " pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.029481 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad0490f9-1430-4511-b8cc-139a6c656b48-ovsdbserver-nb\") pod \"dnsmasq-dns-775bb8f95f-twm2m\" (UID: \"ad0490f9-1430-4511-b8cc-139a6c656b48\") " pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.030379 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad0490f9-1430-4511-b8cc-139a6c656b48-config\") pod \"dnsmasq-dns-775bb8f95f-twm2m\" (UID: \"ad0490f9-1430-4511-b8cc-139a6c656b48\") " pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.030507 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad0490f9-1430-4511-b8cc-139a6c656b48-dns-swift-storage-0\") pod \"dnsmasq-dns-775bb8f95f-twm2m\" (UID: \"ad0490f9-1430-4511-b8cc-139a6c656b48\") " pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.030572 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad0490f9-1430-4511-b8cc-139a6c656b48-dns-svc\") pod \"dnsmasq-dns-775bb8f95f-twm2m\" (UID: \"ad0490f9-1430-4511-b8cc-139a6c656b48\") " pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.031454 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ad0490f9-1430-4511-b8cc-139a6c656b48-openstack-edpm-ipam\") pod 
\"dnsmasq-dns-775bb8f95f-twm2m\" (UID: \"ad0490f9-1430-4511-b8cc-139a6c656b48\") " pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.045340 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cztkv\" (UniqueName: \"kubernetes.io/projected/ad0490f9-1430-4511-b8cc-139a6c656b48-kube-api-access-cztkv\") pod \"dnsmasq-dns-775bb8f95f-twm2m\" (UID: \"ad0490f9-1430-4511-b8cc-139a6c656b48\") " pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.142819 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.169977 4747 generic.go:334] "Generic (PLEG): container finished" podID="fee4e87f-8126-4c7e-bbb1-898b81c4f9b0" containerID="9d7fe17a320d3762e3e55f99d0793a54627fe100bc799b5c6b27e20f6a69466b" exitCode=0 Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.170024 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864557ccdf-8grfz" event={"ID":"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0","Type":"ContainerDied","Data":"9d7fe17a320d3762e3e55f99d0793a54627fe100bc799b5c6b27e20f6a69466b"} Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.170055 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864557ccdf-8grfz" event={"ID":"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0","Type":"ContainerDied","Data":"190f3ab15a83ba1313fdae20194c4aee5c486b7ce777f9bd4825a9887df3a647"} Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.170078 4747 scope.go:117] "RemoveContainer" containerID="9d7fe17a320d3762e3e55f99d0793a54627fe100bc799b5c6b27e20f6a69466b" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.170113 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864557ccdf-8grfz" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.182704 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.194102 4747 scope.go:117] "RemoveContainer" containerID="fef3cbd2a130f0a1c7e40d974ad577095cbabf939d7a0ff4f8213349e9370411" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.217876 4747 scope.go:117] "RemoveContainer" containerID="9d7fe17a320d3762e3e55f99d0793a54627fe100bc799b5c6b27e20f6a69466b" Dec 15 05:56:32 crc kubenswrapper[4747]: E1215 05:56:32.218371 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d7fe17a320d3762e3e55f99d0793a54627fe100bc799b5c6b27e20f6a69466b\": container with ID starting with 9d7fe17a320d3762e3e55f99d0793a54627fe100bc799b5c6b27e20f6a69466b not found: ID does not exist" containerID="9d7fe17a320d3762e3e55f99d0793a54627fe100bc799b5c6b27e20f6a69466b" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.218394 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d7fe17a320d3762e3e55f99d0793a54627fe100bc799b5c6b27e20f6a69466b"} err="failed to get container status \"9d7fe17a320d3762e3e55f99d0793a54627fe100bc799b5c6b27e20f6a69466b\": rpc error: code = NotFound desc = could not find container \"9d7fe17a320d3762e3e55f99d0793a54627fe100bc799b5c6b27e20f6a69466b\": container with ID starting with 9d7fe17a320d3762e3e55f99d0793a54627fe100bc799b5c6b27e20f6a69466b not found: ID does not exist" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.218415 4747 scope.go:117] "RemoveContainer" containerID="fef3cbd2a130f0a1c7e40d974ad577095cbabf939d7a0ff4f8213349e9370411" Dec 15 05:56:32 crc kubenswrapper[4747]: E1215 05:56:32.218662 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"fef3cbd2a130f0a1c7e40d974ad577095cbabf939d7a0ff4f8213349e9370411\": container with ID starting with fef3cbd2a130f0a1c7e40d974ad577095cbabf939d7a0ff4f8213349e9370411 not found: ID does not exist" containerID="fef3cbd2a130f0a1c7e40d974ad577095cbabf939d7a0ff4f8213349e9370411" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.218677 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fef3cbd2a130f0a1c7e40d974ad577095cbabf939d7a0ff4f8213349e9370411"} err="failed to get container status \"fef3cbd2a130f0a1c7e40d974ad577095cbabf939d7a0ff4f8213349e9370411\": rpc error: code = NotFound desc = could not find container \"fef3cbd2a130f0a1c7e40d974ad577095cbabf939d7a0ff4f8213349e9370411\": container with ID starting with fef3cbd2a130f0a1c7e40d974ad577095cbabf939d7a0ff4f8213349e9370411 not found: ID does not exist" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.335220 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwrjm\" (UniqueName: \"kubernetes.io/projected/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-kube-api-access-nwrjm\") pod \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.335281 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-ovsdbserver-sb\") pod \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.335304 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-dns-swift-storage-0\") pod \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " Dec 
15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.335440 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-config\") pod \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.335551 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-dns-svc\") pod \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.335579 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-ovsdbserver-nb\") pod \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\" (UID: \"fee4e87f-8126-4c7e-bbb1-898b81c4f9b0\") " Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.340342 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-kube-api-access-nwrjm" (OuterVolumeSpecName: "kube-api-access-nwrjm") pod "fee4e87f-8126-4c7e-bbb1-898b81c4f9b0" (UID: "fee4e87f-8126-4c7e-bbb1-898b81c4f9b0"). InnerVolumeSpecName "kube-api-access-nwrjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.375478 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fee4e87f-8126-4c7e-bbb1-898b81c4f9b0" (UID: "fee4e87f-8126-4c7e-bbb1-898b81c4f9b0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.376470 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-config" (OuterVolumeSpecName: "config") pod "fee4e87f-8126-4c7e-bbb1-898b81c4f9b0" (UID: "fee4e87f-8126-4c7e-bbb1-898b81c4f9b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.382588 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fee4e87f-8126-4c7e-bbb1-898b81c4f9b0" (UID: "fee4e87f-8126-4c7e-bbb1-898b81c4f9b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.382782 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fee4e87f-8126-4c7e-bbb1-898b81c4f9b0" (UID: "fee4e87f-8126-4c7e-bbb1-898b81c4f9b0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.385336 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fee4e87f-8126-4c7e-bbb1-898b81c4f9b0" (UID: "fee4e87f-8126-4c7e-bbb1-898b81c4f9b0"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.439384 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwrjm\" (UniqueName: \"kubernetes.io/projected/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-kube-api-access-nwrjm\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.439426 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.439437 4747 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.439448 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.439461 4747 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.439471 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.508280 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864557ccdf-8grfz"] Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.516395 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-864557ccdf-8grfz"] Dec 15 
05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.597302 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-775bb8f95f-twm2m"] Dec 15 05:56:32 crc kubenswrapper[4747]: I1215 05:56:32.647491 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fee4e87f-8126-4c7e-bbb1-898b81c4f9b0" path="/var/lib/kubelet/pods/fee4e87f-8126-4c7e-bbb1-898b81c4f9b0/volumes" Dec 15 05:56:33 crc kubenswrapper[4747]: I1215 05:56:33.182108 4747 generic.go:334] "Generic (PLEG): container finished" podID="ad0490f9-1430-4511-b8cc-139a6c656b48" containerID="697c361d154c7a57a90c85915549b8980fe95e0dec139efbd23ae9d3d5b937c0" exitCode=0 Dec 15 05:56:33 crc kubenswrapper[4747]: I1215 05:56:33.182239 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" event={"ID":"ad0490f9-1430-4511-b8cc-139a6c656b48","Type":"ContainerDied","Data":"697c361d154c7a57a90c85915549b8980fe95e0dec139efbd23ae9d3d5b937c0"} Dec 15 05:56:33 crc kubenswrapper[4747]: I1215 05:56:33.182624 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" event={"ID":"ad0490f9-1430-4511-b8cc-139a6c656b48","Type":"ContainerStarted","Data":"b46e43bf04bbb87d01c3e4c9c6ece2bd34f8b0c16d8f9f43f87c24fdeb66c22f"} Dec 15 05:56:34 crc kubenswrapper[4747]: I1215 05:56:34.192969 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" event={"ID":"ad0490f9-1430-4511-b8cc-139a6c656b48","Type":"ContainerStarted","Data":"ecb2ce06a588706b17f3121676fa0b01ae85bebf45ec4c6262ea10f6b52a1ed5"} Dec 15 05:56:34 crc kubenswrapper[4747]: I1215 05:56:34.193684 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:34 crc kubenswrapper[4747]: I1215 05:56:34.218745 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" podStartSLOduration=3.218728142 
podStartE2EDuration="3.218728142s" podCreationTimestamp="2025-12-15 05:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:56:34.212660854 +0000 UTC m=+1157.909172771" watchObservedRunningTime="2025-12-15 05:56:34.218728142 +0000 UTC m=+1157.915240060" Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.185200 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-775bb8f95f-twm2m" Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.241070 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dd77c7755-47bd5"] Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.241280 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dd77c7755-47bd5" podUID="f54d034b-85e7-412e-9cdf-14bee08e155d" containerName="dnsmasq-dns" containerID="cri-o://246afba3e32d5343ba07469ed31606bddf0ba61125d5ab712c1c059e9df78305" gracePeriod=10 Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.661637 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.702512 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-dns-svc\") pod \"f54d034b-85e7-412e-9cdf-14bee08e155d\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.702568 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-config\") pod \"f54d034b-85e7-412e-9cdf-14bee08e155d\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.702595 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-openstack-edpm-ipam\") pod \"f54d034b-85e7-412e-9cdf-14bee08e155d\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.702623 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-ovsdbserver-sb\") pod \"f54d034b-85e7-412e-9cdf-14bee08e155d\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.702710 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-ovsdbserver-nb\") pod \"f54d034b-85e7-412e-9cdf-14bee08e155d\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.702755 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72p9n\" 
(UniqueName: \"kubernetes.io/projected/f54d034b-85e7-412e-9cdf-14bee08e155d-kube-api-access-72p9n\") pod \"f54d034b-85e7-412e-9cdf-14bee08e155d\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.702773 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-dns-swift-storage-0\") pod \"f54d034b-85e7-412e-9cdf-14bee08e155d\" (UID: \"f54d034b-85e7-412e-9cdf-14bee08e155d\") " Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.712502 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f54d034b-85e7-412e-9cdf-14bee08e155d-kube-api-access-72p9n" (OuterVolumeSpecName: "kube-api-access-72p9n") pod "f54d034b-85e7-412e-9cdf-14bee08e155d" (UID: "f54d034b-85e7-412e-9cdf-14bee08e155d"). InnerVolumeSpecName "kube-api-access-72p9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.756338 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-config" (OuterVolumeSpecName: "config") pod "f54d034b-85e7-412e-9cdf-14bee08e155d" (UID: "f54d034b-85e7-412e-9cdf-14bee08e155d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.756590 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "f54d034b-85e7-412e-9cdf-14bee08e155d" (UID: "f54d034b-85e7-412e-9cdf-14bee08e155d"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.756618 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f54d034b-85e7-412e-9cdf-14bee08e155d" (UID: "f54d034b-85e7-412e-9cdf-14bee08e155d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.761115 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f54d034b-85e7-412e-9cdf-14bee08e155d" (UID: "f54d034b-85e7-412e-9cdf-14bee08e155d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.761294 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f54d034b-85e7-412e-9cdf-14bee08e155d" (UID: "f54d034b-85e7-412e-9cdf-14bee08e155d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.761588 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f54d034b-85e7-412e-9cdf-14bee08e155d" (UID: "f54d034b-85e7-412e-9cdf-14bee08e155d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.805867 4747 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.805900 4747 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-config\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.805911 4747 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.805938 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.805953 4747 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.805963 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72p9n\" (UniqueName: \"kubernetes.io/projected/f54d034b-85e7-412e-9cdf-14bee08e155d-kube-api-access-72p9n\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:42 crc kubenswrapper[4747]: I1215 05:56:42.805972 4747 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f54d034b-85e7-412e-9cdf-14bee08e155d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 15 05:56:43 crc kubenswrapper[4747]: I1215 05:56:43.272634 
4747 generic.go:334] "Generic (PLEG): container finished" podID="f54d034b-85e7-412e-9cdf-14bee08e155d" containerID="246afba3e32d5343ba07469ed31606bddf0ba61125d5ab712c1c059e9df78305" exitCode=0 Dec 15 05:56:43 crc kubenswrapper[4747]: I1215 05:56:43.272694 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dd77c7755-47bd5" Dec 15 05:56:43 crc kubenswrapper[4747]: I1215 05:56:43.272714 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd77c7755-47bd5" event={"ID":"f54d034b-85e7-412e-9cdf-14bee08e155d","Type":"ContainerDied","Data":"246afba3e32d5343ba07469ed31606bddf0ba61125d5ab712c1c059e9df78305"} Dec 15 05:56:43 crc kubenswrapper[4747]: I1215 05:56:43.273149 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dd77c7755-47bd5" event={"ID":"f54d034b-85e7-412e-9cdf-14bee08e155d","Type":"ContainerDied","Data":"7cb886486678ee7f0b68515ec4c5719adab7b651d6c334af34b70ef5bdf0a673"} Dec 15 05:56:43 crc kubenswrapper[4747]: I1215 05:56:43.273180 4747 scope.go:117] "RemoveContainer" containerID="246afba3e32d5343ba07469ed31606bddf0ba61125d5ab712c1c059e9df78305" Dec 15 05:56:43 crc kubenswrapper[4747]: I1215 05:56:43.300461 4747 scope.go:117] "RemoveContainer" containerID="647a5ad1640386b0f0df1cabd4bd951728c170c37bbcfe06e531dfbc0cce62cd" Dec 15 05:56:43 crc kubenswrapper[4747]: I1215 05:56:43.308373 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dd77c7755-47bd5"] Dec 15 05:56:43 crc kubenswrapper[4747]: I1215 05:56:43.313980 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dd77c7755-47bd5"] Dec 15 05:56:43 crc kubenswrapper[4747]: I1215 05:56:43.319950 4747 scope.go:117] "RemoveContainer" containerID="246afba3e32d5343ba07469ed31606bddf0ba61125d5ab712c1c059e9df78305" Dec 15 05:56:43 crc kubenswrapper[4747]: E1215 05:56:43.320275 4747 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"246afba3e32d5343ba07469ed31606bddf0ba61125d5ab712c1c059e9df78305\": container with ID starting with 246afba3e32d5343ba07469ed31606bddf0ba61125d5ab712c1c059e9df78305 not found: ID does not exist" containerID="246afba3e32d5343ba07469ed31606bddf0ba61125d5ab712c1c059e9df78305" Dec 15 05:56:43 crc kubenswrapper[4747]: I1215 05:56:43.320317 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"246afba3e32d5343ba07469ed31606bddf0ba61125d5ab712c1c059e9df78305"} err="failed to get container status \"246afba3e32d5343ba07469ed31606bddf0ba61125d5ab712c1c059e9df78305\": rpc error: code = NotFound desc = could not find container \"246afba3e32d5343ba07469ed31606bddf0ba61125d5ab712c1c059e9df78305\": container with ID starting with 246afba3e32d5343ba07469ed31606bddf0ba61125d5ab712c1c059e9df78305 not found: ID does not exist" Dec 15 05:56:43 crc kubenswrapper[4747]: I1215 05:56:43.320344 4747 scope.go:117] "RemoveContainer" containerID="647a5ad1640386b0f0df1cabd4bd951728c170c37bbcfe06e531dfbc0cce62cd" Dec 15 05:56:43 crc kubenswrapper[4747]: E1215 05:56:43.320746 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"647a5ad1640386b0f0df1cabd4bd951728c170c37bbcfe06e531dfbc0cce62cd\": container with ID starting with 647a5ad1640386b0f0df1cabd4bd951728c170c37bbcfe06e531dfbc0cce62cd not found: ID does not exist" containerID="647a5ad1640386b0f0df1cabd4bd951728c170c37bbcfe06e531dfbc0cce62cd" Dec 15 05:56:43 crc kubenswrapper[4747]: I1215 05:56:43.320778 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647a5ad1640386b0f0df1cabd4bd951728c170c37bbcfe06e531dfbc0cce62cd"} err="failed to get container status \"647a5ad1640386b0f0df1cabd4bd951728c170c37bbcfe06e531dfbc0cce62cd\": rpc error: code = NotFound desc = could not find container 
\"647a5ad1640386b0f0df1cabd4bd951728c170c37bbcfe06e531dfbc0cce62cd\": container with ID starting with 647a5ad1640386b0f0df1cabd4bd951728c170c37bbcfe06e531dfbc0cce62cd not found: ID does not exist" Dec 15 05:56:44 crc kubenswrapper[4747]: I1215 05:56:44.639503 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f54d034b-85e7-412e-9cdf-14bee08e155d" path="/var/lib/kubelet/pods/f54d034b-85e7-412e-9cdf-14bee08e155d/volumes" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.254457 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28"] Dec 15 05:56:55 crc kubenswrapper[4747]: E1215 05:56:55.255449 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f54d034b-85e7-412e-9cdf-14bee08e155d" containerName="init" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.255464 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f54d034b-85e7-412e-9cdf-14bee08e155d" containerName="init" Dec 15 05:56:55 crc kubenswrapper[4747]: E1215 05:56:55.255476 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee4e87f-8126-4c7e-bbb1-898b81c4f9b0" containerName="init" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.255482 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee4e87f-8126-4c7e-bbb1-898b81c4f9b0" containerName="init" Dec 15 05:56:55 crc kubenswrapper[4747]: E1215 05:56:55.255514 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f54d034b-85e7-412e-9cdf-14bee08e155d" containerName="dnsmasq-dns" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.255520 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f54d034b-85e7-412e-9cdf-14bee08e155d" containerName="dnsmasq-dns" Dec 15 05:56:55 crc kubenswrapper[4747]: E1215 05:56:55.255536 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee4e87f-8126-4c7e-bbb1-898b81c4f9b0" containerName="dnsmasq-dns" Dec 15 05:56:55 crc 
kubenswrapper[4747]: I1215 05:56:55.255542 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee4e87f-8126-4c7e-bbb1-898b81c4f9b0" containerName="dnsmasq-dns" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.255734 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee4e87f-8126-4c7e-bbb1-898b81c4f9b0" containerName="dnsmasq-dns" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.255757 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f54d034b-85e7-412e-9cdf-14bee08e155d" containerName="dnsmasq-dns" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.256435 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.258214 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.258643 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bfv8q" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.258798 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.258841 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.266213 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28"] Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.349542 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-ssh-key\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-kps28\" (UID: \"d11ad7a8-e6c0-497a-8a1a-0b82be444a86\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.349683 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kps28\" (UID: \"d11ad7a8-e6c0-497a-8a1a-0b82be444a86\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.349775 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kps28\" (UID: \"d11ad7a8-e6c0-497a-8a1a-0b82be444a86\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.349884 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v74hk\" (UniqueName: \"kubernetes.io/projected/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-kube-api-access-v74hk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kps28\" (UID: \"d11ad7a8-e6c0-497a-8a1a-0b82be444a86\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.451395 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kps28\" (UID: \"d11ad7a8-e6c0-497a-8a1a-0b82be444a86\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28" Dec 15 
05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.451467 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kps28\" (UID: \"d11ad7a8-e6c0-497a-8a1a-0b82be444a86\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.451517 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kps28\" (UID: \"d11ad7a8-e6c0-497a-8a1a-0b82be444a86\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.451578 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v74hk\" (UniqueName: \"kubernetes.io/projected/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-kube-api-access-v74hk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kps28\" (UID: \"d11ad7a8-e6c0-497a-8a1a-0b82be444a86\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.459400 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kps28\" (UID: \"d11ad7a8-e6c0-497a-8a1a-0b82be444a86\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.459467 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kps28\" (UID: \"d11ad7a8-e6c0-497a-8a1a-0b82be444a86\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.459543 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kps28\" (UID: \"d11ad7a8-e6c0-497a-8a1a-0b82be444a86\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.466889 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v74hk\" (UniqueName: \"kubernetes.io/projected/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-kube-api-access-v74hk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kps28\" (UID: \"d11ad7a8-e6c0-497a-8a1a-0b82be444a86\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28" Dec 15 05:56:55 crc kubenswrapper[4747]: I1215 05:56:55.570672 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28" Dec 15 05:56:56 crc kubenswrapper[4747]: I1215 05:56:56.164612 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28"] Dec 15 05:56:56 crc kubenswrapper[4747]: I1215 05:56:56.394843 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28" event={"ID":"d11ad7a8-e6c0-497a-8a1a-0b82be444a86","Type":"ContainerStarted","Data":"7ba7d04409c23c525dd7769b892efb3c9548d1d12f6459f951fb8d2a7c50f89c"} Dec 15 05:56:56 crc kubenswrapper[4747]: I1215 05:56:56.396760 4747 generic.go:334] "Generic (PLEG): container finished" podID="7a527363-fdfb-4bbe-a50e-41923c5cc78c" containerID="768177cd2805e18620b632f28bca28a5d4a857887e4b3679cc777325b7d87302" exitCode=0 Dec 15 05:56:56 crc kubenswrapper[4747]: I1215 05:56:56.396817 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7a527363-fdfb-4bbe-a50e-41923c5cc78c","Type":"ContainerDied","Data":"768177cd2805e18620b632f28bca28a5d4a857887e4b3679cc777325b7d87302"} Dec 15 05:56:57 crc kubenswrapper[4747]: I1215 05:56:57.415023 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7a527363-fdfb-4bbe-a50e-41923c5cc78c","Type":"ContainerStarted","Data":"a6c64c9c0be40101641f19a44c13f3f50f4b44669c793c0bfb185d16838bcc42"} Dec 15 05:56:57 crc kubenswrapper[4747]: I1215 05:56:57.416011 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 15 05:56:57 crc kubenswrapper[4747]: I1215 05:56:57.443538 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=35.44352276 podStartE2EDuration="35.44352276s" podCreationTimestamp="2025-12-15 05:56:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:56:57.436955197 +0000 UTC m=+1181.133467134" watchObservedRunningTime="2025-12-15 05:56:57.44352276 +0000 UTC m=+1181.140034677" Dec 15 05:56:58 crc kubenswrapper[4747]: I1215 05:56:58.432634 4747 generic.go:334] "Generic (PLEG): container finished" podID="42363452-e04c-462e-8341-6f3f99392357" containerID="1c8ea96f9339a971f96546315c1cbb628285e68520446c06fc47281b35196e54" exitCode=0 Dec 15 05:56:58 crc kubenswrapper[4747]: I1215 05:56:58.432711 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"42363452-e04c-462e-8341-6f3f99392357","Type":"ContainerDied","Data":"1c8ea96f9339a971f96546315c1cbb628285e68520446c06fc47281b35196e54"} Dec 15 05:57:00 crc kubenswrapper[4747]: I1215 05:57:00.459544 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"42363452-e04c-462e-8341-6f3f99392357","Type":"ContainerStarted","Data":"39b54a000c6a9832d42f311895bddcb7ac6402635159047b57cc22d1d730bdf3"} Dec 15 05:57:00 crc kubenswrapper[4747]: I1215 05:57:00.460490 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:57:00 crc kubenswrapper[4747]: I1215 05:57:00.491411 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.491396117 podStartE2EDuration="37.491396117s" podCreationTimestamp="2025-12-15 05:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 05:57:00.479892387 +0000 UTC m=+1184.176404304" watchObservedRunningTime="2025-12-15 05:57:00.491396117 +0000 UTC m=+1184.187908034" Dec 15 05:57:09 crc kubenswrapper[4747]: I1215 05:57:09.550397 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28" event={"ID":"d11ad7a8-e6c0-497a-8a1a-0b82be444a86","Type":"ContainerStarted","Data":"9e4b7e21c6755c0fa1b9355add47aa647c58fd011819ed72e5c3a774dc3df59e"} Dec 15 05:57:09 crc kubenswrapper[4747]: I1215 05:57:09.588379 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28" podStartSLOduration=1.711781882 podStartE2EDuration="14.588366027s" podCreationTimestamp="2025-12-15 05:56:55 +0000 UTC" firstStartedPulling="2025-12-15 05:56:56.161125483 +0000 UTC m=+1179.857637400" lastFinishedPulling="2025-12-15 05:57:09.037709628 +0000 UTC m=+1192.734221545" observedRunningTime="2025-12-15 05:57:09.586245432 +0000 UTC m=+1193.282757348" watchObservedRunningTime="2025-12-15 05:57:09.588366027 +0000 UTC m=+1193.284877944" Dec 15 05:57:12 crc kubenswrapper[4747]: I1215 05:57:12.484242 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 15 05:57:13 crc kubenswrapper[4747]: I1215 05:57:13.537121 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 15 05:57:20 crc kubenswrapper[4747]: I1215 05:57:20.650896 4747 generic.go:334] "Generic (PLEG): container finished" podID="d11ad7a8-e6c0-497a-8a1a-0b82be444a86" containerID="9e4b7e21c6755c0fa1b9355add47aa647c58fd011819ed72e5c3a774dc3df59e" exitCode=0 Dec 15 05:57:20 crc kubenswrapper[4747]: I1215 05:57:20.650957 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28" event={"ID":"d11ad7a8-e6c0-497a-8a1a-0b82be444a86","Type":"ContainerDied","Data":"9e4b7e21c6755c0fa1b9355add47aa647c58fd011819ed72e5c3a774dc3df59e"} Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.014860 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28" Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.073089 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-ssh-key\") pod \"d11ad7a8-e6c0-497a-8a1a-0b82be444a86\" (UID: \"d11ad7a8-e6c0-497a-8a1a-0b82be444a86\") " Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.073149 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v74hk\" (UniqueName: \"kubernetes.io/projected/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-kube-api-access-v74hk\") pod \"d11ad7a8-e6c0-497a-8a1a-0b82be444a86\" (UID: \"d11ad7a8-e6c0-497a-8a1a-0b82be444a86\") " Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.073177 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-inventory\") pod \"d11ad7a8-e6c0-497a-8a1a-0b82be444a86\" (UID: \"d11ad7a8-e6c0-497a-8a1a-0b82be444a86\") " Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.073254 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-repo-setup-combined-ca-bundle\") pod \"d11ad7a8-e6c0-497a-8a1a-0b82be444a86\" (UID: \"d11ad7a8-e6c0-497a-8a1a-0b82be444a86\") " Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.078775 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-kube-api-access-v74hk" (OuterVolumeSpecName: "kube-api-access-v74hk") pod "d11ad7a8-e6c0-497a-8a1a-0b82be444a86" (UID: "d11ad7a8-e6c0-497a-8a1a-0b82be444a86"). InnerVolumeSpecName "kube-api-access-v74hk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.079214 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d11ad7a8-e6c0-497a-8a1a-0b82be444a86" (UID: "d11ad7a8-e6c0-497a-8a1a-0b82be444a86"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.095563 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-inventory" (OuterVolumeSpecName: "inventory") pod "d11ad7a8-e6c0-497a-8a1a-0b82be444a86" (UID: "d11ad7a8-e6c0-497a-8a1a-0b82be444a86"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.096559 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d11ad7a8-e6c0-497a-8a1a-0b82be444a86" (UID: "d11ad7a8-e6c0-497a-8a1a-0b82be444a86"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.175858 4747 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.175888 4747 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.175898 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v74hk\" (UniqueName: \"kubernetes.io/projected/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-kube-api-access-v74hk\") on node \"crc\" DevicePath \"\"" Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.175908 4747 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d11ad7a8-e6c0-497a-8a1a-0b82be444a86-inventory\") on node \"crc\" DevicePath \"\"" Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.674657 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28" event={"ID":"d11ad7a8-e6c0-497a-8a1a-0b82be444a86","Type":"ContainerDied","Data":"7ba7d04409c23c525dd7769b892efb3c9548d1d12f6459f951fb8d2a7c50f89c"} Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.674734 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ba7d04409c23c525dd7769b892efb3c9548d1d12f6459f951fb8d2a7c50f89c" Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.674840 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kps28" Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.792738 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-nngr7"] Dec 15 05:57:22 crc kubenswrapper[4747]: E1215 05:57:22.793498 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11ad7a8-e6c0-497a-8a1a-0b82be444a86" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.793526 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11ad7a8-e6c0-497a-8a1a-0b82be444a86" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.793712 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11ad7a8-e6c0-497a-8a1a-0b82be444a86" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.794651 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nngr7" Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.798494 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bfv8q" Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.815633 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.815960 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.816449 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.833219 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-nngr7"] Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.989982 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjq9t\" (UniqueName: \"kubernetes.io/projected/9a1bff2c-a33c-4816-998e-243617f6e473-kube-api-access-fjq9t\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nngr7\" (UID: \"9a1bff2c-a33c-4816-998e-243617f6e473\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nngr7" Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.990089 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a1bff2c-a33c-4816-998e-243617f6e473-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nngr7\" (UID: \"9a1bff2c-a33c-4816-998e-243617f6e473\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nngr7" Dec 15 05:57:22 crc kubenswrapper[4747]: I1215 05:57:22.990133 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a1bff2c-a33c-4816-998e-243617f6e473-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nngr7\" (UID: \"9a1bff2c-a33c-4816-998e-243617f6e473\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nngr7" Dec 15 05:57:23 crc kubenswrapper[4747]: I1215 05:57:23.091863 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjq9t\" (UniqueName: \"kubernetes.io/projected/9a1bff2c-a33c-4816-998e-243617f6e473-kube-api-access-fjq9t\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nngr7\" (UID: \"9a1bff2c-a33c-4816-998e-243617f6e473\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nngr7" Dec 15 05:57:23 crc kubenswrapper[4747]: I1215 05:57:23.091975 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a1bff2c-a33c-4816-998e-243617f6e473-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nngr7\" (UID: \"9a1bff2c-a33c-4816-998e-243617f6e473\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nngr7" Dec 15 05:57:23 crc kubenswrapper[4747]: I1215 05:57:23.092030 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a1bff2c-a33c-4816-998e-243617f6e473-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nngr7\" (UID: \"9a1bff2c-a33c-4816-998e-243617f6e473\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nngr7" Dec 15 05:57:23 crc kubenswrapper[4747]: I1215 05:57:23.098256 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a1bff2c-a33c-4816-998e-243617f6e473-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nngr7\" (UID: \"9a1bff2c-a33c-4816-998e-243617f6e473\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nngr7" Dec 15 05:57:23 crc kubenswrapper[4747]: I1215 05:57:23.100081 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a1bff2c-a33c-4816-998e-243617f6e473-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nngr7\" (UID: \"9a1bff2c-a33c-4816-998e-243617f6e473\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nngr7" Dec 15 05:57:23 crc kubenswrapper[4747]: I1215 05:57:23.108908 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjq9t\" (UniqueName: \"kubernetes.io/projected/9a1bff2c-a33c-4816-998e-243617f6e473-kube-api-access-fjq9t\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nngr7\" (UID: \"9a1bff2c-a33c-4816-998e-243617f6e473\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nngr7" Dec 15 05:57:23 crc kubenswrapper[4747]: I1215 05:57:23.115121 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nngr7" Dec 15 05:57:23 crc kubenswrapper[4747]: W1215 05:57:23.597453 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a1bff2c_a33c_4816_998e_243617f6e473.slice/crio-c92d233d24d31e4385b49115bd2162e39958d6a45ae1c83bdf9968924cdeb606 WatchSource:0}: Error finding container c92d233d24d31e4385b49115bd2162e39958d6a45ae1c83bdf9968924cdeb606: Status 404 returned error can't find the container with id c92d233d24d31e4385b49115bd2162e39958d6a45ae1c83bdf9968924cdeb606 Dec 15 05:57:23 crc kubenswrapper[4747]: I1215 05:57:23.598359 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-nngr7"] Dec 15 05:57:23 crc kubenswrapper[4747]: I1215 05:57:23.683109 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nngr7" event={"ID":"9a1bff2c-a33c-4816-998e-243617f6e473","Type":"ContainerStarted","Data":"c92d233d24d31e4385b49115bd2162e39958d6a45ae1c83bdf9968924cdeb606"} Dec 15 05:57:24 crc kubenswrapper[4747]: I1215 05:57:24.693837 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nngr7" event={"ID":"9a1bff2c-a33c-4816-998e-243617f6e473","Type":"ContainerStarted","Data":"4320b4d90f505f247355d441c93064f17b5f66ccf08146b195b30cb271b72cff"} Dec 15 05:57:24 crc kubenswrapper[4747]: I1215 05:57:24.717746 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nngr7" podStartSLOduration=2.035041878 podStartE2EDuration="2.717720208s" podCreationTimestamp="2025-12-15 05:57:22 +0000 UTC" firstStartedPulling="2025-12-15 05:57:23.600935051 +0000 UTC m=+1207.297446968" lastFinishedPulling="2025-12-15 05:57:24.283613381 +0000 UTC m=+1207.980125298" observedRunningTime="2025-12-15 
05:57:24.715200682 +0000 UTC m=+1208.411712599" watchObservedRunningTime="2025-12-15 05:57:24.717720208 +0000 UTC m=+1208.414232125" Dec 15 05:57:26 crc kubenswrapper[4747]: I1215 05:57:26.726610 4747 generic.go:334] "Generic (PLEG): container finished" podID="9a1bff2c-a33c-4816-998e-243617f6e473" containerID="4320b4d90f505f247355d441c93064f17b5f66ccf08146b195b30cb271b72cff" exitCode=0 Dec 15 05:57:26 crc kubenswrapper[4747]: I1215 05:57:26.726901 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nngr7" event={"ID":"9a1bff2c-a33c-4816-998e-243617f6e473","Type":"ContainerDied","Data":"4320b4d90f505f247355d441c93064f17b5f66ccf08146b195b30cb271b72cff"} Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.094476 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nngr7" Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.195040 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjq9t\" (UniqueName: \"kubernetes.io/projected/9a1bff2c-a33c-4816-998e-243617f6e473-kube-api-access-fjq9t\") pod \"9a1bff2c-a33c-4816-998e-243617f6e473\" (UID: \"9a1bff2c-a33c-4816-998e-243617f6e473\") " Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.195177 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a1bff2c-a33c-4816-998e-243617f6e473-inventory\") pod \"9a1bff2c-a33c-4816-998e-243617f6e473\" (UID: \"9a1bff2c-a33c-4816-998e-243617f6e473\") " Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.195553 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a1bff2c-a33c-4816-998e-243617f6e473-ssh-key\") pod \"9a1bff2c-a33c-4816-998e-243617f6e473\" (UID: \"9a1bff2c-a33c-4816-998e-243617f6e473\") " Dec 15 05:57:28 crc 
kubenswrapper[4747]: I1215 05:57:28.202462 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a1bff2c-a33c-4816-998e-243617f6e473-kube-api-access-fjq9t" (OuterVolumeSpecName: "kube-api-access-fjq9t") pod "9a1bff2c-a33c-4816-998e-243617f6e473" (UID: "9a1bff2c-a33c-4816-998e-243617f6e473"). InnerVolumeSpecName "kube-api-access-fjq9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.217375 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a1bff2c-a33c-4816-998e-243617f6e473-inventory" (OuterVolumeSpecName: "inventory") pod "9a1bff2c-a33c-4816-998e-243617f6e473" (UID: "9a1bff2c-a33c-4816-998e-243617f6e473"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.219201 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a1bff2c-a33c-4816-998e-243617f6e473-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9a1bff2c-a33c-4816-998e-243617f6e473" (UID: "9a1bff2c-a33c-4816-998e-243617f6e473"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.298864 4747 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a1bff2c-a33c-4816-998e-243617f6e473-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.298905 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjq9t\" (UniqueName: \"kubernetes.io/projected/9a1bff2c-a33c-4816-998e-243617f6e473-kube-api-access-fjq9t\") on node \"crc\" DevicePath \"\"" Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.298919 4747 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a1bff2c-a33c-4816-998e-243617f6e473-inventory\") on node \"crc\" DevicePath \"\"" Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.747419 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nngr7" event={"ID":"9a1bff2c-a33c-4816-998e-243617f6e473","Type":"ContainerDied","Data":"c92d233d24d31e4385b49115bd2162e39958d6a45ae1c83bdf9968924cdeb606"} Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.747464 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c92d233d24d31e4385b49115bd2162e39958d6a45ae1c83bdf9968924cdeb606" Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.747543 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nngr7" Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.865151 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.865225 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.895520 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh"] Dec 15 05:57:28 crc kubenswrapper[4747]: E1215 05:57:28.896184 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a1bff2c-a33c-4816-998e-243617f6e473" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.896210 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a1bff2c-a33c-4816-998e-243617f6e473" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.896490 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a1bff2c-a33c-4816-998e-243617f6e473" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.897430 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh" Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.899874 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.900021 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.901058 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.902881 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh"] Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.906370 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bfv8q" Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.916592 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2af42599-0cda-45de-b1fe-9bed5ad6f035-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh\" (UID: \"2af42599-0cda-45de-b1fe-9bed5ad6f035\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh" Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.916742 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2af42599-0cda-45de-b1fe-9bed5ad6f035-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh\" (UID: \"2af42599-0cda-45de-b1fe-9bed5ad6f035\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh" Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.916906 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2af42599-0cda-45de-b1fe-9bed5ad6f035-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh\" (UID: \"2af42599-0cda-45de-b1fe-9bed5ad6f035\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh" Dec 15 05:57:28 crc kubenswrapper[4747]: I1215 05:57:28.917050 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clnjs\" (UniqueName: \"kubernetes.io/projected/2af42599-0cda-45de-b1fe-9bed5ad6f035-kube-api-access-clnjs\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh\" (UID: \"2af42599-0cda-45de-b1fe-9bed5ad6f035\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh" Dec 15 05:57:29 crc kubenswrapper[4747]: I1215 05:57:29.019606 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2af42599-0cda-45de-b1fe-9bed5ad6f035-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh\" (UID: \"2af42599-0cda-45de-b1fe-9bed5ad6f035\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh" Dec 15 05:57:29 crc kubenswrapper[4747]: I1215 05:57:29.019657 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2af42599-0cda-45de-b1fe-9bed5ad6f035-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh\" (UID: \"2af42599-0cda-45de-b1fe-9bed5ad6f035\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh" Dec 15 05:57:29 crc kubenswrapper[4747]: I1215 05:57:29.019689 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2af42599-0cda-45de-b1fe-9bed5ad6f035-inventory\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh\" (UID: \"2af42599-0cda-45de-b1fe-9bed5ad6f035\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh" Dec 15 05:57:29 crc kubenswrapper[4747]: I1215 05:57:29.019717 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clnjs\" (UniqueName: \"kubernetes.io/projected/2af42599-0cda-45de-b1fe-9bed5ad6f035-kube-api-access-clnjs\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh\" (UID: \"2af42599-0cda-45de-b1fe-9bed5ad6f035\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh" Dec 15 05:57:29 crc kubenswrapper[4747]: I1215 05:57:29.025965 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2af42599-0cda-45de-b1fe-9bed5ad6f035-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh\" (UID: \"2af42599-0cda-45de-b1fe-9bed5ad6f035\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh" Dec 15 05:57:29 crc kubenswrapper[4747]: I1215 05:57:29.026391 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2af42599-0cda-45de-b1fe-9bed5ad6f035-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh\" (UID: \"2af42599-0cda-45de-b1fe-9bed5ad6f035\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh" Dec 15 05:57:29 crc kubenswrapper[4747]: I1215 05:57:29.027016 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2af42599-0cda-45de-b1fe-9bed5ad6f035-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh\" (UID: \"2af42599-0cda-45de-b1fe-9bed5ad6f035\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh" Dec 15 05:57:29 crc kubenswrapper[4747]: I1215 05:57:29.037336 4747 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-clnjs\" (UniqueName: \"kubernetes.io/projected/2af42599-0cda-45de-b1fe-9bed5ad6f035-kube-api-access-clnjs\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh\" (UID: \"2af42599-0cda-45de-b1fe-9bed5ad6f035\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh" Dec 15 05:57:29 crc kubenswrapper[4747]: I1215 05:57:29.216188 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh" Dec 15 05:57:29 crc kubenswrapper[4747]: I1215 05:57:29.708601 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh"] Dec 15 05:57:29 crc kubenswrapper[4747]: I1215 05:57:29.756318 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh" event={"ID":"2af42599-0cda-45de-b1fe-9bed5ad6f035","Type":"ContainerStarted","Data":"241ee58d159e5956a0ebf761eb7d818a3336c5f5f3f5c096fad25ddb1dfd40e7"} Dec 15 05:57:30 crc kubenswrapper[4747]: I1215 05:57:30.771893 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh" event={"ID":"2af42599-0cda-45de-b1fe-9bed5ad6f035","Type":"ContainerStarted","Data":"e27e100aa9563fa8a0446c1718e363fd2274ff80dce249ac89527c9ccbfef931"} Dec 15 05:57:31 crc kubenswrapper[4747]: I1215 05:57:31.436641 4747 scope.go:117] "RemoveContainer" containerID="0e878fe8d8975fe2f99daff6abc119591b283a5216a74fb7c9901edd89a0d656" Dec 15 05:57:31 crc kubenswrapper[4747]: I1215 05:57:31.473391 4747 scope.go:117] "RemoveContainer" containerID="0a29c784c81fc753a180f35eb4973ee2dd62fc90b65276d06981bf99cd60274d" Dec 15 05:57:31 crc kubenswrapper[4747]: I1215 05:57:31.506690 4747 scope.go:117] "RemoveContainer" containerID="7e5ab965190fd6a4bb7f0f426cc015cca7ab231dca777f9c018ffe750cf11f57" Dec 15 05:57:58 crc kubenswrapper[4747]: I1215 
05:57:58.865284 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 05:57:58 crc kubenswrapper[4747]: I1215 05:57:58.866140 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 05:58:28 crc kubenswrapper[4747]: I1215 05:58:28.865571 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 05:58:28 crc kubenswrapper[4747]: I1215 05:58:28.866004 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 05:58:28 crc kubenswrapper[4747]: I1215 05:58:28.866060 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 05:58:28 crc kubenswrapper[4747]: I1215 05:58:28.866609 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"24e2ae64f4e610798e09d68f965f566dada71476cc8359af941aa647f9585c49"} pod="openshift-machine-config-operator/machine-config-daemon-nldtn" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 15 05:58:28 crc kubenswrapper[4747]: I1215 05:58:28.866720 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" containerID="cri-o://24e2ae64f4e610798e09d68f965f566dada71476cc8359af941aa647f9585c49" gracePeriod=600 Dec 15 05:58:29 crc kubenswrapper[4747]: I1215 05:58:29.341463 4747 generic.go:334] "Generic (PLEG): container finished" podID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerID="24e2ae64f4e610798e09d68f965f566dada71476cc8359af941aa647f9585c49" exitCode=0 Dec 15 05:58:29 crc kubenswrapper[4747]: I1215 05:58:29.341534 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerDied","Data":"24e2ae64f4e610798e09d68f965f566dada71476cc8359af941aa647f9585c49"} Dec 15 05:58:29 crc kubenswrapper[4747]: I1215 05:58:29.341758 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerStarted","Data":"080353cc0a0655a6ae55c2b56c2536dc00fa2c3b98890884d8fe921591d73c15"} Dec 15 05:58:29 crc kubenswrapper[4747]: I1215 05:58:29.341788 4747 scope.go:117] "RemoveContainer" containerID="90f12c1fab813a5975dd1bb7980ef75e3315dc4c893a83d1630f8dfbea3891d6" Dec 15 05:58:29 crc kubenswrapper[4747]: I1215 05:58:29.362972 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh" podStartSLOduration=60.777102393 podStartE2EDuration="1m1.362961632s" podCreationTimestamp="2025-12-15 05:57:28 +0000 UTC" firstStartedPulling="2025-12-15 05:57:29.71503583 +0000 UTC m=+1213.411547747" 
lastFinishedPulling="2025-12-15 05:57:30.300895069 +0000 UTC m=+1213.997406986" observedRunningTime="2025-12-15 05:57:30.791387417 +0000 UTC m=+1214.487899334" watchObservedRunningTime="2025-12-15 05:58:29.362961632 +0000 UTC m=+1273.059473549" Dec 15 05:58:31 crc kubenswrapper[4747]: I1215 05:58:31.623503 4747 scope.go:117] "RemoveContainer" containerID="5035b1e88796f5b0359f747dccec2f3a4fa91d321e342ca56dc644e782351e1f" Dec 15 05:58:31 crc kubenswrapper[4747]: I1215 05:58:31.667688 4747 scope.go:117] "RemoveContainer" containerID="7e8bc6047523f041b8000fd62cbc52aa9d9a6f104eb0eeaea7fe0c442e8d0a4b" Dec 15 05:58:31 crc kubenswrapper[4747]: I1215 05:58:31.694623 4747 scope.go:117] "RemoveContainer" containerID="56e02d43c553bcb63c84e7420195e07acf1694be8e3a4f5410fb5bd736df3cbb" Dec 15 06:00:00 crc kubenswrapper[4747]: I1215 06:00:00.158027 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29429640-g62tq"] Dec 15 06:00:00 crc kubenswrapper[4747]: I1215 06:00:00.159681 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29429640-g62tq" Dec 15 06:00:00 crc kubenswrapper[4747]: I1215 06:00:00.161691 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 15 06:00:00 crc kubenswrapper[4747]: I1215 06:00:00.161752 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 15 06:00:00 crc kubenswrapper[4747]: I1215 06:00:00.166154 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm8wl\" (UniqueName: \"kubernetes.io/projected/ccd36556-00fa-447e-886e-0759f7cb8283-kube-api-access-hm8wl\") pod \"collect-profiles-29429640-g62tq\" (UID: \"ccd36556-00fa-447e-886e-0759f7cb8283\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429640-g62tq" Dec 15 06:00:00 crc kubenswrapper[4747]: I1215 06:00:00.166524 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccd36556-00fa-447e-886e-0759f7cb8283-config-volume\") pod \"collect-profiles-29429640-g62tq\" (UID: \"ccd36556-00fa-447e-886e-0759f7cb8283\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429640-g62tq" Dec 15 06:00:00 crc kubenswrapper[4747]: I1215 06:00:00.166668 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccd36556-00fa-447e-886e-0759f7cb8283-secret-volume\") pod \"collect-profiles-29429640-g62tq\" (UID: \"ccd36556-00fa-447e-886e-0759f7cb8283\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429640-g62tq" Dec 15 06:00:00 crc kubenswrapper[4747]: I1215 06:00:00.171962 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29429640-g62tq"] Dec 15 06:00:00 crc kubenswrapper[4747]: I1215 06:00:00.269229 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccd36556-00fa-447e-886e-0759f7cb8283-config-volume\") pod \"collect-profiles-29429640-g62tq\" (UID: \"ccd36556-00fa-447e-886e-0759f7cb8283\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429640-g62tq" Dec 15 06:00:00 crc kubenswrapper[4747]: I1215 06:00:00.269304 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccd36556-00fa-447e-886e-0759f7cb8283-secret-volume\") pod \"collect-profiles-29429640-g62tq\" (UID: \"ccd36556-00fa-447e-886e-0759f7cb8283\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429640-g62tq" Dec 15 06:00:00 crc kubenswrapper[4747]: I1215 06:00:00.269405 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm8wl\" (UniqueName: \"kubernetes.io/projected/ccd36556-00fa-447e-886e-0759f7cb8283-kube-api-access-hm8wl\") pod \"collect-profiles-29429640-g62tq\" (UID: \"ccd36556-00fa-447e-886e-0759f7cb8283\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429640-g62tq" Dec 15 06:00:00 crc kubenswrapper[4747]: I1215 06:00:00.270448 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccd36556-00fa-447e-886e-0759f7cb8283-config-volume\") pod \"collect-profiles-29429640-g62tq\" (UID: \"ccd36556-00fa-447e-886e-0759f7cb8283\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429640-g62tq" Dec 15 06:00:00 crc kubenswrapper[4747]: I1215 06:00:00.278255 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ccd36556-00fa-447e-886e-0759f7cb8283-secret-volume\") pod \"collect-profiles-29429640-g62tq\" (UID: \"ccd36556-00fa-447e-886e-0759f7cb8283\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429640-g62tq" Dec 15 06:00:00 crc kubenswrapper[4747]: I1215 06:00:00.286336 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm8wl\" (UniqueName: \"kubernetes.io/projected/ccd36556-00fa-447e-886e-0759f7cb8283-kube-api-access-hm8wl\") pod \"collect-profiles-29429640-g62tq\" (UID: \"ccd36556-00fa-447e-886e-0759f7cb8283\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429640-g62tq" Dec 15 06:00:00 crc kubenswrapper[4747]: I1215 06:00:00.478042 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29429640-g62tq" Dec 15 06:00:00 crc kubenswrapper[4747]: I1215 06:00:00.893818 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29429640-g62tq"] Dec 15 06:00:01 crc kubenswrapper[4747]: I1215 06:00:01.320866 4747 generic.go:334] "Generic (PLEG): container finished" podID="ccd36556-00fa-447e-886e-0759f7cb8283" containerID="d6c7fb660ed81587e950dadcdccf39ffd092d201870b6b0e71a79b6f8dc96d80" exitCode=0 Dec 15 06:00:01 crc kubenswrapper[4747]: I1215 06:00:01.320974 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29429640-g62tq" event={"ID":"ccd36556-00fa-447e-886e-0759f7cb8283","Type":"ContainerDied","Data":"d6c7fb660ed81587e950dadcdccf39ffd092d201870b6b0e71a79b6f8dc96d80"} Dec 15 06:00:01 crc kubenswrapper[4747]: I1215 06:00:01.322047 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29429640-g62tq" 
event={"ID":"ccd36556-00fa-447e-886e-0759f7cb8283","Type":"ContainerStarted","Data":"7ff2a5e44b4bf04ea19bf9377122202bdf38ced59bf3b5de851d32d6283e75a2"} Dec 15 06:00:02 crc kubenswrapper[4747]: I1215 06:00:02.621294 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29429640-g62tq" Dec 15 06:00:02 crc kubenswrapper[4747]: I1215 06:00:02.724920 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm8wl\" (UniqueName: \"kubernetes.io/projected/ccd36556-00fa-447e-886e-0759f7cb8283-kube-api-access-hm8wl\") pod \"ccd36556-00fa-447e-886e-0759f7cb8283\" (UID: \"ccd36556-00fa-447e-886e-0759f7cb8283\") " Dec 15 06:00:02 crc kubenswrapper[4747]: I1215 06:00:02.725080 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccd36556-00fa-447e-886e-0759f7cb8283-secret-volume\") pod \"ccd36556-00fa-447e-886e-0759f7cb8283\" (UID: \"ccd36556-00fa-447e-886e-0759f7cb8283\") " Dec 15 06:00:02 crc kubenswrapper[4747]: I1215 06:00:02.725168 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccd36556-00fa-447e-886e-0759f7cb8283-config-volume\") pod \"ccd36556-00fa-447e-886e-0759f7cb8283\" (UID: \"ccd36556-00fa-447e-886e-0759f7cb8283\") " Dec 15 06:00:02 crc kubenswrapper[4747]: I1215 06:00:02.725902 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccd36556-00fa-447e-886e-0759f7cb8283-config-volume" (OuterVolumeSpecName: "config-volume") pod "ccd36556-00fa-447e-886e-0759f7cb8283" (UID: "ccd36556-00fa-447e-886e-0759f7cb8283"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 06:00:02 crc kubenswrapper[4747]: I1215 06:00:02.730360 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd36556-00fa-447e-886e-0759f7cb8283-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ccd36556-00fa-447e-886e-0759f7cb8283" (UID: "ccd36556-00fa-447e-886e-0759f7cb8283"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:00:02 crc kubenswrapper[4747]: I1215 06:00:02.730685 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccd36556-00fa-447e-886e-0759f7cb8283-kube-api-access-hm8wl" (OuterVolumeSpecName: "kube-api-access-hm8wl") pod "ccd36556-00fa-447e-886e-0759f7cb8283" (UID: "ccd36556-00fa-447e-886e-0759f7cb8283"). InnerVolumeSpecName "kube-api-access-hm8wl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:00:02 crc kubenswrapper[4747]: I1215 06:00:02.827910 4747 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccd36556-00fa-447e-886e-0759f7cb8283-config-volume\") on node \"crc\" DevicePath \"\"" Dec 15 06:00:02 crc kubenswrapper[4747]: I1215 06:00:02.828260 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm8wl\" (UniqueName: \"kubernetes.io/projected/ccd36556-00fa-447e-886e-0759f7cb8283-kube-api-access-hm8wl\") on node \"crc\" DevicePath \"\"" Dec 15 06:00:02 crc kubenswrapper[4747]: I1215 06:00:02.828275 4747 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ccd36556-00fa-447e-886e-0759f7cb8283-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 15 06:00:03 crc kubenswrapper[4747]: I1215 06:00:03.344145 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29429640-g62tq" 
event={"ID":"ccd36556-00fa-447e-886e-0759f7cb8283","Type":"ContainerDied","Data":"7ff2a5e44b4bf04ea19bf9377122202bdf38ced59bf3b5de851d32d6283e75a2"} Dec 15 06:00:03 crc kubenswrapper[4747]: I1215 06:00:03.344194 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ff2a5e44b4bf04ea19bf9377122202bdf38ced59bf3b5de851d32d6283e75a2" Dec 15 06:00:03 crc kubenswrapper[4747]: I1215 06:00:03.344200 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29429640-g62tq" Dec 15 06:00:19 crc kubenswrapper[4747]: I1215 06:00:19.497986 4747 generic.go:334] "Generic (PLEG): container finished" podID="2af42599-0cda-45de-b1fe-9bed5ad6f035" containerID="e27e100aa9563fa8a0446c1718e363fd2274ff80dce249ac89527c9ccbfef931" exitCode=0 Dec 15 06:00:19 crc kubenswrapper[4747]: I1215 06:00:19.498072 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh" event={"ID":"2af42599-0cda-45de-b1fe-9bed5ad6f035","Type":"ContainerDied","Data":"e27e100aa9563fa8a0446c1718e363fd2274ff80dce249ac89527c9ccbfef931"} Dec 15 06:00:20 crc kubenswrapper[4747]: I1215 06:00:20.872772 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh" Dec 15 06:00:20 crc kubenswrapper[4747]: I1215 06:00:20.991108 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2af42599-0cda-45de-b1fe-9bed5ad6f035-inventory\") pod \"2af42599-0cda-45de-b1fe-9bed5ad6f035\" (UID: \"2af42599-0cda-45de-b1fe-9bed5ad6f035\") " Dec 15 06:00:20 crc kubenswrapper[4747]: I1215 06:00:20.991177 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clnjs\" (UniqueName: \"kubernetes.io/projected/2af42599-0cda-45de-b1fe-9bed5ad6f035-kube-api-access-clnjs\") pod \"2af42599-0cda-45de-b1fe-9bed5ad6f035\" (UID: \"2af42599-0cda-45de-b1fe-9bed5ad6f035\") " Dec 15 06:00:20 crc kubenswrapper[4747]: I1215 06:00:20.991279 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2af42599-0cda-45de-b1fe-9bed5ad6f035-ssh-key\") pod \"2af42599-0cda-45de-b1fe-9bed5ad6f035\" (UID: \"2af42599-0cda-45de-b1fe-9bed5ad6f035\") " Dec 15 06:00:20 crc kubenswrapper[4747]: I1215 06:00:20.991401 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2af42599-0cda-45de-b1fe-9bed5ad6f035-bootstrap-combined-ca-bundle\") pod \"2af42599-0cda-45de-b1fe-9bed5ad6f035\" (UID: \"2af42599-0cda-45de-b1fe-9bed5ad6f035\") " Dec 15 06:00:20 crc kubenswrapper[4747]: I1215 06:00:20.996906 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2af42599-0cda-45de-b1fe-9bed5ad6f035-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2af42599-0cda-45de-b1fe-9bed5ad6f035" (UID: "2af42599-0cda-45de-b1fe-9bed5ad6f035"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:00:20 crc kubenswrapper[4747]: I1215 06:00:20.997115 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2af42599-0cda-45de-b1fe-9bed5ad6f035-kube-api-access-clnjs" (OuterVolumeSpecName: "kube-api-access-clnjs") pod "2af42599-0cda-45de-b1fe-9bed5ad6f035" (UID: "2af42599-0cda-45de-b1fe-9bed5ad6f035"). InnerVolumeSpecName "kube-api-access-clnjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.016994 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2af42599-0cda-45de-b1fe-9bed5ad6f035-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2af42599-0cda-45de-b1fe-9bed5ad6f035" (UID: "2af42599-0cda-45de-b1fe-9bed5ad6f035"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.019169 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2af42599-0cda-45de-b1fe-9bed5ad6f035-inventory" (OuterVolumeSpecName: "inventory") pod "2af42599-0cda-45de-b1fe-9bed5ad6f035" (UID: "2af42599-0cda-45de-b1fe-9bed5ad6f035"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.092870 4747 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2af42599-0cda-45de-b1fe-9bed5ad6f035-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.092900 4747 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2af42599-0cda-45de-b1fe-9bed5ad6f035-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.092917 4747 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2af42599-0cda-45de-b1fe-9bed5ad6f035-inventory\") on node \"crc\" DevicePath \"\"" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.092962 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clnjs\" (UniqueName: \"kubernetes.io/projected/2af42599-0cda-45de-b1fe-9bed5ad6f035-kube-api-access-clnjs\") on node \"crc\" DevicePath \"\"" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.522371 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh" event={"ID":"2af42599-0cda-45de-b1fe-9bed5ad6f035","Type":"ContainerDied","Data":"241ee58d159e5956a0ebf761eb7d818a3336c5f5f3f5c096fad25ddb1dfd40e7"} Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.522432 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="241ee58d159e5956a0ebf761eb7d818a3336c5f5f3f5c096fad25ddb1dfd40e7" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.522781 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.599070 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh"] Dec 15 06:00:21 crc kubenswrapper[4747]: E1215 06:00:21.599902 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccd36556-00fa-447e-886e-0759f7cb8283" containerName="collect-profiles" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.599940 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccd36556-00fa-447e-886e-0759f7cb8283" containerName="collect-profiles" Dec 15 06:00:21 crc kubenswrapper[4747]: E1215 06:00:21.599957 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2af42599-0cda-45de-b1fe-9bed5ad6f035" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.599965 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="2af42599-0cda-45de-b1fe-9bed5ad6f035" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.600209 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="2af42599-0cda-45de-b1fe-9bed5ad6f035" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.600243 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccd36556-00fa-447e-886e-0759f7cb8283" containerName="collect-profiles" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.601041 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.604505 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.604732 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.604896 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bfv8q" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.605107 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.608688 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh"] Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.703402 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1073c0b-63fe-4562-bc3c-953bd3697022-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh\" (UID: \"e1073c0b-63fe-4562-bc3c-953bd3697022\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.703498 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1073c0b-63fe-4562-bc3c-953bd3697022-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh\" (UID: \"e1073c0b-63fe-4562-bc3c-953bd3697022\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.703555 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78wbx\" (UniqueName: \"kubernetes.io/projected/e1073c0b-63fe-4562-bc3c-953bd3697022-kube-api-access-78wbx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh\" (UID: \"e1073c0b-63fe-4562-bc3c-953bd3697022\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.806169 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1073c0b-63fe-4562-bc3c-953bd3697022-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh\" (UID: \"e1073c0b-63fe-4562-bc3c-953bd3697022\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.806452 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1073c0b-63fe-4562-bc3c-953bd3697022-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh\" (UID: \"e1073c0b-63fe-4562-bc3c-953bd3697022\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.806590 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78wbx\" (UniqueName: \"kubernetes.io/projected/e1073c0b-63fe-4562-bc3c-953bd3697022-kube-api-access-78wbx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh\" (UID: \"e1073c0b-63fe-4562-bc3c-953bd3697022\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.812227 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1073c0b-63fe-4562-bc3c-953bd3697022-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh\" (UID: \"e1073c0b-63fe-4562-bc3c-953bd3697022\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.812307 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1073c0b-63fe-4562-bc3c-953bd3697022-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh\" (UID: \"e1073c0b-63fe-4562-bc3c-953bd3697022\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.826390 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78wbx\" (UniqueName: \"kubernetes.io/projected/e1073c0b-63fe-4562-bc3c-953bd3697022-kube-api-access-78wbx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh\" (UID: \"e1073c0b-63fe-4562-bc3c-953bd3697022\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh" Dec 15 06:00:21 crc kubenswrapper[4747]: I1215 06:00:21.922826 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh" Dec 15 06:00:22 crc kubenswrapper[4747]: I1215 06:00:22.414347 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh"] Dec 15 06:00:22 crc kubenswrapper[4747]: I1215 06:00:22.533840 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh" event={"ID":"e1073c0b-63fe-4562-bc3c-953bd3697022","Type":"ContainerStarted","Data":"2ef0cf9a8de13dba363731469f00650648815c73cca10e899061730e15d4a6fb"} Dec 15 06:00:24 crc kubenswrapper[4747]: I1215 06:00:24.558157 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh" event={"ID":"e1073c0b-63fe-4562-bc3c-953bd3697022","Type":"ContainerStarted","Data":"62a5a1f823fe63a769533a9b3ff7f7e6111d99eb5927ed2a7a9058a11124eca1"} Dec 15 06:00:24 crc kubenswrapper[4747]: I1215 06:00:24.577587 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh" podStartSLOduration=2.560486907 podStartE2EDuration="3.577573309s" podCreationTimestamp="2025-12-15 06:00:21 +0000 UTC" firstStartedPulling="2025-12-15 06:00:22.41778897 +0000 UTC m=+1386.114300887" lastFinishedPulling="2025-12-15 06:00:23.434875371 +0000 UTC m=+1387.131387289" observedRunningTime="2025-12-15 06:00:24.574550589 +0000 UTC m=+1388.271062506" watchObservedRunningTime="2025-12-15 06:00:24.577573309 +0000 UTC m=+1388.274085226" Dec 15 06:00:58 crc kubenswrapper[4747]: I1215 06:00:58.865235 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:00:58 
crc kubenswrapper[4747]: I1215 06:00:58.865797 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:01:00 crc kubenswrapper[4747]: I1215 06:01:00.144134 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29429641-58jbc"] Dec 15 06:01:00 crc kubenswrapper[4747]: I1215 06:01:00.145501 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29429641-58jbc" Dec 15 06:01:00 crc kubenswrapper[4747]: I1215 06:01:00.156396 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29429641-58jbc"] Dec 15 06:01:00 crc kubenswrapper[4747]: I1215 06:01:00.306735 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79hg5\" (UniqueName: \"kubernetes.io/projected/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-kube-api-access-79hg5\") pod \"keystone-cron-29429641-58jbc\" (UID: \"e52cba4b-1373-4ebc-8e01-f0cb86d099ea\") " pod="openstack/keystone-cron-29429641-58jbc" Dec 15 06:01:00 crc kubenswrapper[4747]: I1215 06:01:00.306827 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-config-data\") pod \"keystone-cron-29429641-58jbc\" (UID: \"e52cba4b-1373-4ebc-8e01-f0cb86d099ea\") " pod="openstack/keystone-cron-29429641-58jbc" Dec 15 06:01:00 crc kubenswrapper[4747]: I1215 06:01:00.307079 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-fernet-keys\") pod 
\"keystone-cron-29429641-58jbc\" (UID: \"e52cba4b-1373-4ebc-8e01-f0cb86d099ea\") " pod="openstack/keystone-cron-29429641-58jbc" Dec 15 06:01:00 crc kubenswrapper[4747]: I1215 06:01:00.307236 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-combined-ca-bundle\") pod \"keystone-cron-29429641-58jbc\" (UID: \"e52cba4b-1373-4ebc-8e01-f0cb86d099ea\") " pod="openstack/keystone-cron-29429641-58jbc" Dec 15 06:01:00 crc kubenswrapper[4747]: I1215 06:01:00.408538 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-fernet-keys\") pod \"keystone-cron-29429641-58jbc\" (UID: \"e52cba4b-1373-4ebc-8e01-f0cb86d099ea\") " pod="openstack/keystone-cron-29429641-58jbc" Dec 15 06:01:00 crc kubenswrapper[4747]: I1215 06:01:00.408682 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-combined-ca-bundle\") pod \"keystone-cron-29429641-58jbc\" (UID: \"e52cba4b-1373-4ebc-8e01-f0cb86d099ea\") " pod="openstack/keystone-cron-29429641-58jbc" Dec 15 06:01:00 crc kubenswrapper[4747]: I1215 06:01:00.408717 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79hg5\" (UniqueName: \"kubernetes.io/projected/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-kube-api-access-79hg5\") pod \"keystone-cron-29429641-58jbc\" (UID: \"e52cba4b-1373-4ebc-8e01-f0cb86d099ea\") " pod="openstack/keystone-cron-29429641-58jbc" Dec 15 06:01:00 crc kubenswrapper[4747]: I1215 06:01:00.408773 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-config-data\") pod 
\"keystone-cron-29429641-58jbc\" (UID: \"e52cba4b-1373-4ebc-8e01-f0cb86d099ea\") " pod="openstack/keystone-cron-29429641-58jbc" Dec 15 06:01:00 crc kubenswrapper[4747]: I1215 06:01:00.416401 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-fernet-keys\") pod \"keystone-cron-29429641-58jbc\" (UID: \"e52cba4b-1373-4ebc-8e01-f0cb86d099ea\") " pod="openstack/keystone-cron-29429641-58jbc" Dec 15 06:01:00 crc kubenswrapper[4747]: I1215 06:01:00.416967 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-config-data\") pod \"keystone-cron-29429641-58jbc\" (UID: \"e52cba4b-1373-4ebc-8e01-f0cb86d099ea\") " pod="openstack/keystone-cron-29429641-58jbc" Dec 15 06:01:00 crc kubenswrapper[4747]: I1215 06:01:00.417904 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-combined-ca-bundle\") pod \"keystone-cron-29429641-58jbc\" (UID: \"e52cba4b-1373-4ebc-8e01-f0cb86d099ea\") " pod="openstack/keystone-cron-29429641-58jbc" Dec 15 06:01:00 crc kubenswrapper[4747]: I1215 06:01:00.424722 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79hg5\" (UniqueName: \"kubernetes.io/projected/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-kube-api-access-79hg5\") pod \"keystone-cron-29429641-58jbc\" (UID: \"e52cba4b-1373-4ebc-8e01-f0cb86d099ea\") " pod="openstack/keystone-cron-29429641-58jbc" Dec 15 06:01:00 crc kubenswrapper[4747]: I1215 06:01:00.462005 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29429641-58jbc" Dec 15 06:01:00 crc kubenswrapper[4747]: I1215 06:01:00.904635 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29429641-58jbc"] Dec 15 06:01:00 crc kubenswrapper[4747]: W1215 06:01:00.904863 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode52cba4b_1373_4ebc_8e01_f0cb86d099ea.slice/crio-d93d7499bb608f20ed8396b84f1aa789e71383a89dc405e28dea26d0f4e30795 WatchSource:0}: Error finding container d93d7499bb608f20ed8396b84f1aa789e71383a89dc405e28dea26d0f4e30795: Status 404 returned error can't find the container with id d93d7499bb608f20ed8396b84f1aa789e71383a89dc405e28dea26d0f4e30795 Dec 15 06:01:00 crc kubenswrapper[4747]: I1215 06:01:00.917296 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29429641-58jbc" event={"ID":"e52cba4b-1373-4ebc-8e01-f0cb86d099ea","Type":"ContainerStarted","Data":"d93d7499bb608f20ed8396b84f1aa789e71383a89dc405e28dea26d0f4e30795"} Dec 15 06:01:01 crc kubenswrapper[4747]: I1215 06:01:01.925595 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29429641-58jbc" event={"ID":"e52cba4b-1373-4ebc-8e01-f0cb86d099ea","Type":"ContainerStarted","Data":"804abdcd75617f21a83828e84f2d2fe7003744c68367e85205f0a9464e3eb704"} Dec 15 06:01:01 crc kubenswrapper[4747]: I1215 06:01:01.954661 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29429641-58jbc" podStartSLOduration=1.954646777 podStartE2EDuration="1.954646777s" podCreationTimestamp="2025-12-15 06:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 06:01:01.947800052 +0000 UTC m=+1425.644311969" watchObservedRunningTime="2025-12-15 06:01:01.954646777 +0000 UTC m=+1425.651158684" Dec 15 06:01:03 crc 
kubenswrapper[4747]: I1215 06:01:03.946149 4747 generic.go:334] "Generic (PLEG): container finished" podID="e52cba4b-1373-4ebc-8e01-f0cb86d099ea" containerID="804abdcd75617f21a83828e84f2d2fe7003744c68367e85205f0a9464e3eb704" exitCode=0 Dec 15 06:01:03 crc kubenswrapper[4747]: I1215 06:01:03.946187 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29429641-58jbc" event={"ID":"e52cba4b-1373-4ebc-8e01-f0cb86d099ea","Type":"ContainerDied","Data":"804abdcd75617f21a83828e84f2d2fe7003744c68367e85205f0a9464e3eb704"} Dec 15 06:01:05 crc kubenswrapper[4747]: I1215 06:01:05.256231 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29429641-58jbc" Dec 15 06:01:05 crc kubenswrapper[4747]: I1215 06:01:05.424659 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79hg5\" (UniqueName: \"kubernetes.io/projected/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-kube-api-access-79hg5\") pod \"e52cba4b-1373-4ebc-8e01-f0cb86d099ea\" (UID: \"e52cba4b-1373-4ebc-8e01-f0cb86d099ea\") " Dec 15 06:01:05 crc kubenswrapper[4747]: I1215 06:01:05.424752 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-fernet-keys\") pod \"e52cba4b-1373-4ebc-8e01-f0cb86d099ea\" (UID: \"e52cba4b-1373-4ebc-8e01-f0cb86d099ea\") " Dec 15 06:01:05 crc kubenswrapper[4747]: I1215 06:01:05.424820 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-config-data\") pod \"e52cba4b-1373-4ebc-8e01-f0cb86d099ea\" (UID: \"e52cba4b-1373-4ebc-8e01-f0cb86d099ea\") " Dec 15 06:01:05 crc kubenswrapper[4747]: I1215 06:01:05.424857 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-combined-ca-bundle\") pod \"e52cba4b-1373-4ebc-8e01-f0cb86d099ea\" (UID: \"e52cba4b-1373-4ebc-8e01-f0cb86d099ea\") " Dec 15 06:01:05 crc kubenswrapper[4747]: I1215 06:01:05.432051 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e52cba4b-1373-4ebc-8e01-f0cb86d099ea" (UID: "e52cba4b-1373-4ebc-8e01-f0cb86d099ea"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:01:05 crc kubenswrapper[4747]: I1215 06:01:05.432181 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-kube-api-access-79hg5" (OuterVolumeSpecName: "kube-api-access-79hg5") pod "e52cba4b-1373-4ebc-8e01-f0cb86d099ea" (UID: "e52cba4b-1373-4ebc-8e01-f0cb86d099ea"). InnerVolumeSpecName "kube-api-access-79hg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:01:05 crc kubenswrapper[4747]: I1215 06:01:05.452801 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e52cba4b-1373-4ebc-8e01-f0cb86d099ea" (UID: "e52cba4b-1373-4ebc-8e01-f0cb86d099ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:01:05 crc kubenswrapper[4747]: I1215 06:01:05.469741 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-config-data" (OuterVolumeSpecName: "config-data") pod "e52cba4b-1373-4ebc-8e01-f0cb86d099ea" (UID: "e52cba4b-1373-4ebc-8e01-f0cb86d099ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:01:05 crc kubenswrapper[4747]: I1215 06:01:05.528334 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79hg5\" (UniqueName: \"kubernetes.io/projected/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-kube-api-access-79hg5\") on node \"crc\" DevicePath \"\"" Dec 15 06:01:05 crc kubenswrapper[4747]: I1215 06:01:05.528558 4747 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 15 06:01:05 crc kubenswrapper[4747]: I1215 06:01:05.528575 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 06:01:05 crc kubenswrapper[4747]: I1215 06:01:05.528588 4747 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52cba4b-1373-4ebc-8e01-f0cb86d099ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 06:01:05 crc kubenswrapper[4747]: I1215 06:01:05.967916 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29429641-58jbc" event={"ID":"e52cba4b-1373-4ebc-8e01-f0cb86d099ea","Type":"ContainerDied","Data":"d93d7499bb608f20ed8396b84f1aa789e71383a89dc405e28dea26d0f4e30795"} Dec 15 06:01:05 crc kubenswrapper[4747]: I1215 06:01:05.967975 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d93d7499bb608f20ed8396b84f1aa789e71383a89dc405e28dea26d0f4e30795" Dec 15 06:01:05 crc kubenswrapper[4747]: I1215 06:01:05.967992 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29429641-58jbc" Dec 15 06:01:26 crc kubenswrapper[4747]: I1215 06:01:26.038088 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-195a-account-create-update-zngwp"] Dec 15 06:01:26 crc kubenswrapper[4747]: I1215 06:01:26.045189 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-wkf55"] Dec 15 06:01:26 crc kubenswrapper[4747]: I1215 06:01:26.053609 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-195a-account-create-update-zngwp"] Dec 15 06:01:26 crc kubenswrapper[4747]: I1215 06:01:26.059875 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-wkf55"] Dec 15 06:01:26 crc kubenswrapper[4747]: I1215 06:01:26.639482 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae24795d-f2ab-4a80-a939-fa3bddb8f742" path="/var/lib/kubelet/pods/ae24795d-f2ab-4a80-a939-fa3bddb8f742/volumes" Dec 15 06:01:26 crc kubenswrapper[4747]: I1215 06:01:26.640114 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f75d9da5-1f04-45c4-87ad-b236ed88c43c" path="/var/lib/kubelet/pods/f75d9da5-1f04-45c4-87ad-b236ed88c43c/volumes" Dec 15 06:01:28 crc kubenswrapper[4747]: I1215 06:01:28.029836 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-693d-account-create-update-87r57"] Dec 15 06:01:28 crc kubenswrapper[4747]: I1215 06:01:28.038594 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-wwznk"] Dec 15 06:01:28 crc kubenswrapper[4747]: I1215 06:01:28.045790 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b906-account-create-update-5c6w9"] Dec 15 06:01:28 crc kubenswrapper[4747]: I1215 06:01:28.051071 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-z7r2t"] Dec 15 06:01:28 crc kubenswrapper[4747]: I1215 06:01:28.057796 4747 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/placement-693d-account-create-update-87r57"] Dec 15 06:01:28 crc kubenswrapper[4747]: I1215 06:01:28.062875 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-wwznk"] Dec 15 06:01:28 crc kubenswrapper[4747]: I1215 06:01:28.068034 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b906-account-create-update-5c6w9"] Dec 15 06:01:28 crc kubenswrapper[4747]: I1215 06:01:28.073069 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-z7r2t"] Dec 15 06:01:28 crc kubenswrapper[4747]: I1215 06:01:28.641897 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07706a57-9497-4ff4-8e6f-92c84e806c2a" path="/var/lib/kubelet/pods/07706a57-9497-4ff4-8e6f-92c84e806c2a/volumes" Dec 15 06:01:28 crc kubenswrapper[4747]: I1215 06:01:28.642761 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c319a7c-6074-41bf-9410-03fca21a603c" path="/var/lib/kubelet/pods/9c319a7c-6074-41bf-9410-03fca21a603c/volumes" Dec 15 06:01:28 crc kubenswrapper[4747]: I1215 06:01:28.643371 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f339ea36-b8e2-4309-aeb3-aa8d4a2eb137" path="/var/lib/kubelet/pods/f339ea36-b8e2-4309-aeb3-aa8d4a2eb137/volumes" Dec 15 06:01:28 crc kubenswrapper[4747]: I1215 06:01:28.643966 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8da0056-3ad7-49f6-857d-ad1710ecf088" path="/var/lib/kubelet/pods/f8da0056-3ad7-49f6-857d-ad1710ecf088/volumes" Dec 15 06:01:28 crc kubenswrapper[4747]: I1215 06:01:28.865355 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:01:28 crc kubenswrapper[4747]: I1215 
06:01:28.865442 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:01:31 crc kubenswrapper[4747]: I1215 06:01:31.813050 4747 scope.go:117] "RemoveContainer" containerID="9bca76fc2f69254ae07fdc6160b6b8b10617e59e5f2ae7bec23ee5b93f871480" Dec 15 06:01:31 crc kubenswrapper[4747]: I1215 06:01:31.838646 4747 scope.go:117] "RemoveContainer" containerID="36747122a11638d14c7dffe0437bded122022fab9a12fbae9fedad0901eb325f" Dec 15 06:01:31 crc kubenswrapper[4747]: I1215 06:01:31.881588 4747 scope.go:117] "RemoveContainer" containerID="8d9855913de655fb7f5f4f738bd02cd16538a3be3417c4708183d8ceb37c2436" Dec 15 06:01:31 crc kubenswrapper[4747]: I1215 06:01:31.922211 4747 scope.go:117] "RemoveContainer" containerID="d315be2bddacfc247d3f9cb5e93548805a5811e387d9ad556bb465273972648c" Dec 15 06:01:31 crc kubenswrapper[4747]: I1215 06:01:31.973003 4747 scope.go:117] "RemoveContainer" containerID="029acaa05fab390d66c6e409de12a9371987344077287c949f05ad55002b0352" Dec 15 06:01:32 crc kubenswrapper[4747]: I1215 06:01:32.007217 4747 scope.go:117] "RemoveContainer" containerID="edb611d6ebceb5d0c7c671494d6dd8f48e7f94209d79f3837908f999e3ceccd2" Dec 15 06:01:55 crc kubenswrapper[4747]: I1215 06:01:55.032568 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-jpfz6"] Dec 15 06:01:55 crc kubenswrapper[4747]: I1215 06:01:55.039185 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-jpfz6"] Dec 15 06:01:56 crc kubenswrapper[4747]: I1215 06:01:56.035078 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-492b-account-create-update-cc8pp"] Dec 15 06:01:56 crc kubenswrapper[4747]: I1215 06:01:56.043076 4747 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-05ec-account-create-update-dw6xr"] Dec 15 06:01:56 crc kubenswrapper[4747]: I1215 06:01:56.052414 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-mnk7n"] Dec 15 06:01:56 crc kubenswrapper[4747]: I1215 06:01:56.062397 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-4kqfj"] Dec 15 06:01:56 crc kubenswrapper[4747]: I1215 06:01:56.071694 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-8851-account-create-update-svz5f"] Dec 15 06:01:56 crc kubenswrapper[4747]: I1215 06:01:56.078043 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-492b-account-create-update-cc8pp"] Dec 15 06:01:56 crc kubenswrapper[4747]: I1215 06:01:56.083265 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-4kqfj"] Dec 15 06:01:56 crc kubenswrapper[4747]: I1215 06:01:56.088224 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-mnk7n"] Dec 15 06:01:56 crc kubenswrapper[4747]: I1215 06:01:56.093037 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-05ec-account-create-update-dw6xr"] Dec 15 06:01:56 crc kubenswrapper[4747]: I1215 06:01:56.100813 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-8851-account-create-update-svz5f"] Dec 15 06:01:56 crc kubenswrapper[4747]: I1215 06:01:56.107341 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-bw8hg"] Dec 15 06:01:56 crc kubenswrapper[4747]: I1215 06:01:56.112756 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-bw8hg"] Dec 15 06:01:56 crc kubenswrapper[4747]: I1215 06:01:56.641685 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24e3063e-4cb4-4695-9f2a-0a26592ec3cf" 
path="/var/lib/kubelet/pods/24e3063e-4cb4-4695-9f2a-0a26592ec3cf/volumes" Dec 15 06:01:56 crc kubenswrapper[4747]: I1215 06:01:56.642354 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32d62287-c73f-47e1-9e64-eb23eaf98dc0" path="/var/lib/kubelet/pods/32d62287-c73f-47e1-9e64-eb23eaf98dc0/volumes" Dec 15 06:01:56 crc kubenswrapper[4747]: I1215 06:01:56.642998 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33c060b9-802c-43ba-8020-2bf8afda93ce" path="/var/lib/kubelet/pods/33c060b9-802c-43ba-8020-2bf8afda93ce/volumes" Dec 15 06:01:56 crc kubenswrapper[4747]: I1215 06:01:56.643566 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aa0c3cd-1263-4dc6-9d30-b58efa93e393" path="/var/lib/kubelet/pods/4aa0c3cd-1263-4dc6-9d30-b58efa93e393/volumes" Dec 15 06:01:56 crc kubenswrapper[4747]: I1215 06:01:56.644650 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="961c6dcd-1a16-4c9c-90f4-1ec4325e3512" path="/var/lib/kubelet/pods/961c6dcd-1a16-4c9c-90f4-1ec4325e3512/volumes" Dec 15 06:01:56 crc kubenswrapper[4747]: I1215 06:01:56.645235 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9be93f4-5113-41ec-9604-4142d479155d" path="/var/lib/kubelet/pods/f9be93f4-5113-41ec-9604-4142d479155d/volumes" Dec 15 06:01:56 crc kubenswrapper[4747]: I1215 06:01:56.645769 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd62cda8-66b6-4fe4-976d-0723d296a262" path="/var/lib/kubelet/pods/fd62cda8-66b6-4fe4-976d-0723d296a262/volumes" Dec 15 06:01:58 crc kubenswrapper[4747]: I1215 06:01:58.865548 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:01:58 crc kubenswrapper[4747]: I1215 06:01:58.865982 4747 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:01:58 crc kubenswrapper[4747]: I1215 06:01:58.866046 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 06:01:58 crc kubenswrapper[4747]: I1215 06:01:58.866707 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"080353cc0a0655a6ae55c2b56c2536dc00fa2c3b98890884d8fe921591d73c15"} pod="openshift-machine-config-operator/machine-config-daemon-nldtn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 15 06:01:58 crc kubenswrapper[4747]: I1215 06:01:58.866770 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" containerID="cri-o://080353cc0a0655a6ae55c2b56c2536dc00fa2c3b98890884d8fe921591d73c15" gracePeriod=600 Dec 15 06:01:59 crc kubenswrapper[4747]: I1215 06:01:59.512620 4747 generic.go:334] "Generic (PLEG): container finished" podID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerID="080353cc0a0655a6ae55c2b56c2536dc00fa2c3b98890884d8fe921591d73c15" exitCode=0 Dec 15 06:01:59 crc kubenswrapper[4747]: I1215 06:01:59.512705 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerDied","Data":"080353cc0a0655a6ae55c2b56c2536dc00fa2c3b98890884d8fe921591d73c15"} Dec 15 06:01:59 crc kubenswrapper[4747]: I1215 
06:01:59.513228 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerStarted","Data":"1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610"} Dec 15 06:01:59 crc kubenswrapper[4747]: I1215 06:01:59.513265 4747 scope.go:117] "RemoveContainer" containerID="24e2ae64f4e610798e09d68f965f566dada71476cc8359af941aa647f9585c49" Dec 15 06:02:04 crc kubenswrapper[4747]: I1215 06:02:04.027510 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-xf272"] Dec 15 06:02:04 crc kubenswrapper[4747]: I1215 06:02:04.035006 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-xf272"] Dec 15 06:02:04 crc kubenswrapper[4747]: I1215 06:02:04.639514 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25a4a3fa-f57f-41f5-9f10-664cf17f38c1" path="/var/lib/kubelet/pods/25a4a3fa-f57f-41f5-9f10-664cf17f38c1/volumes" Dec 15 06:02:08 crc kubenswrapper[4747]: I1215 06:02:08.629911 4747 generic.go:334] "Generic (PLEG): container finished" podID="e1073c0b-63fe-4562-bc3c-953bd3697022" containerID="62a5a1f823fe63a769533a9b3ff7f7e6111d99eb5927ed2a7a9058a11124eca1" exitCode=0 Dec 15 06:02:08 crc kubenswrapper[4747]: I1215 06:02:08.640149 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh" event={"ID":"e1073c0b-63fe-4562-bc3c-953bd3697022","Type":"ContainerDied","Data":"62a5a1f823fe63a769533a9b3ff7f7e6111d99eb5927ed2a7a9058a11124eca1"} Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.097766 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.199694 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1073c0b-63fe-4562-bc3c-953bd3697022-ssh-key\") pod \"e1073c0b-63fe-4562-bc3c-953bd3697022\" (UID: \"e1073c0b-63fe-4562-bc3c-953bd3697022\") " Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.199748 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78wbx\" (UniqueName: \"kubernetes.io/projected/e1073c0b-63fe-4562-bc3c-953bd3697022-kube-api-access-78wbx\") pod \"e1073c0b-63fe-4562-bc3c-953bd3697022\" (UID: \"e1073c0b-63fe-4562-bc3c-953bd3697022\") " Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.199782 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1073c0b-63fe-4562-bc3c-953bd3697022-inventory\") pod \"e1073c0b-63fe-4562-bc3c-953bd3697022\" (UID: \"e1073c0b-63fe-4562-bc3c-953bd3697022\") " Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.205043 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1073c0b-63fe-4562-bc3c-953bd3697022-kube-api-access-78wbx" (OuterVolumeSpecName: "kube-api-access-78wbx") pod "e1073c0b-63fe-4562-bc3c-953bd3697022" (UID: "e1073c0b-63fe-4562-bc3c-953bd3697022"). InnerVolumeSpecName "kube-api-access-78wbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.224993 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1073c0b-63fe-4562-bc3c-953bd3697022-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e1073c0b-63fe-4562-bc3c-953bd3697022" (UID: "e1073c0b-63fe-4562-bc3c-953bd3697022"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.226086 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1073c0b-63fe-4562-bc3c-953bd3697022-inventory" (OuterVolumeSpecName: "inventory") pod "e1073c0b-63fe-4562-bc3c-953bd3697022" (UID: "e1073c0b-63fe-4562-bc3c-953bd3697022"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.301841 4747 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1073c0b-63fe-4562-bc3c-953bd3697022-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.301870 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78wbx\" (UniqueName: \"kubernetes.io/projected/e1073c0b-63fe-4562-bc3c-953bd3697022-kube-api-access-78wbx\") on node \"crc\" DevicePath \"\"" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.301884 4747 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1073c0b-63fe-4562-bc3c-953bd3697022-inventory\") on node \"crc\" DevicePath \"\"" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.660823 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh" event={"ID":"e1073c0b-63fe-4562-bc3c-953bd3697022","Type":"ContainerDied","Data":"2ef0cf9a8de13dba363731469f00650648815c73cca10e899061730e15d4a6fb"} Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.661183 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ef0cf9a8de13dba363731469f00650648815c73cca10e899061730e15d4a6fb" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.661258 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.734113 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz"] Dec 15 06:02:10 crc kubenswrapper[4747]: E1215 06:02:10.734511 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1073c0b-63fe-4562-bc3c-953bd3697022" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.734534 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1073c0b-63fe-4562-bc3c-953bd3697022" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 15 06:02:10 crc kubenswrapper[4747]: E1215 06:02:10.734556 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e52cba4b-1373-4ebc-8e01-f0cb86d099ea" containerName="keystone-cron" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.734562 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="e52cba4b-1373-4ebc-8e01-f0cb86d099ea" containerName="keystone-cron" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.734758 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1073c0b-63fe-4562-bc3c-953bd3697022" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.734788 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="e52cba4b-1373-4ebc-8e01-f0cb86d099ea" containerName="keystone-cron" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.735391 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.737698 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bfv8q" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.737907 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.738299 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.738487 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.744615 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz"] Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.816289 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/163accf5-f1cd-48a4-93e3-4c7e6172470e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz\" (UID: \"163accf5-f1cd-48a4-93e3-4c7e6172470e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.816443 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggmcb\" (UniqueName: \"kubernetes.io/projected/163accf5-f1cd-48a4-93e3-4c7e6172470e-kube-api-access-ggmcb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz\" (UID: \"163accf5-f1cd-48a4-93e3-4c7e6172470e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz" Dec 15 06:02:10 crc kubenswrapper[4747]: 
I1215 06:02:10.816475 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/163accf5-f1cd-48a4-93e3-4c7e6172470e-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz\" (UID: \"163accf5-f1cd-48a4-93e3-4c7e6172470e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.919314 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/163accf5-f1cd-48a4-93e3-4c7e6172470e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz\" (UID: \"163accf5-f1cd-48a4-93e3-4c7e6172470e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.919505 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggmcb\" (UniqueName: \"kubernetes.io/projected/163accf5-f1cd-48a4-93e3-4c7e6172470e-kube-api-access-ggmcb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz\" (UID: \"163accf5-f1cd-48a4-93e3-4c7e6172470e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.919582 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/163accf5-f1cd-48a4-93e3-4c7e6172470e-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz\" (UID: \"163accf5-f1cd-48a4-93e3-4c7e6172470e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.924582 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/163accf5-f1cd-48a4-93e3-4c7e6172470e-ssh-key\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz\" (UID: \"163accf5-f1cd-48a4-93e3-4c7e6172470e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.927432 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/163accf5-f1cd-48a4-93e3-4c7e6172470e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz\" (UID: \"163accf5-f1cd-48a4-93e3-4c7e6172470e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz" Dec 15 06:02:10 crc kubenswrapper[4747]: I1215 06:02:10.935246 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggmcb\" (UniqueName: \"kubernetes.io/projected/163accf5-f1cd-48a4-93e3-4c7e6172470e-kube-api-access-ggmcb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz\" (UID: \"163accf5-f1cd-48a4-93e3-4c7e6172470e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz" Dec 15 06:02:11 crc kubenswrapper[4747]: I1215 06:02:11.052285 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz" Dec 15 06:02:11 crc kubenswrapper[4747]: I1215 06:02:11.570810 4747 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 15 06:02:11 crc kubenswrapper[4747]: I1215 06:02:11.573546 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz"] Dec 15 06:02:11 crc kubenswrapper[4747]: I1215 06:02:11.670480 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz" event={"ID":"163accf5-f1cd-48a4-93e3-4c7e6172470e","Type":"ContainerStarted","Data":"d8294c49bdf616c71c278b5758b7e980754a79f7c4dc77bd2a18f77d65eca7a5"} Dec 15 06:02:12 crc kubenswrapper[4747]: I1215 06:02:12.682503 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz" event={"ID":"163accf5-f1cd-48a4-93e3-4c7e6172470e","Type":"ContainerStarted","Data":"52ea7228cccab5a8d977c9a3562b8f1fa1b6fe7d5c116b30c89bed8fa92247d5"} Dec 15 06:02:12 crc kubenswrapper[4747]: I1215 06:02:12.701276 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz" podStartSLOduration=1.987146171 podStartE2EDuration="2.701243615s" podCreationTimestamp="2025-12-15 06:02:10 +0000 UTC" firstStartedPulling="2025-12-15 06:02:11.570543345 +0000 UTC m=+1495.267055262" lastFinishedPulling="2025-12-15 06:02:12.284640789 +0000 UTC m=+1495.981152706" observedRunningTime="2025-12-15 06:02:12.697124925 +0000 UTC m=+1496.393636842" watchObservedRunningTime="2025-12-15 06:02:12.701243615 +0000 UTC m=+1496.397755532" Dec 15 06:02:17 crc kubenswrapper[4747]: I1215 06:02:17.314831 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7k42q"] Dec 15 06:02:17 crc 
kubenswrapper[4747]: I1215 06:02:17.317326 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7k42q" Dec 15 06:02:17 crc kubenswrapper[4747]: I1215 06:02:17.329647 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7k42q"] Dec 15 06:02:17 crc kubenswrapper[4747]: I1215 06:02:17.459214 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5glds\" (UniqueName: \"kubernetes.io/projected/5fa3fb74-286b-4859-9b43-6ec5b3c307c6-kube-api-access-5glds\") pod \"redhat-marketplace-7k42q\" (UID: \"5fa3fb74-286b-4859-9b43-6ec5b3c307c6\") " pod="openshift-marketplace/redhat-marketplace-7k42q" Dec 15 06:02:17 crc kubenswrapper[4747]: I1215 06:02:17.459671 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fa3fb74-286b-4859-9b43-6ec5b3c307c6-utilities\") pod \"redhat-marketplace-7k42q\" (UID: \"5fa3fb74-286b-4859-9b43-6ec5b3c307c6\") " pod="openshift-marketplace/redhat-marketplace-7k42q" Dec 15 06:02:17 crc kubenswrapper[4747]: I1215 06:02:17.459821 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fa3fb74-286b-4859-9b43-6ec5b3c307c6-catalog-content\") pod \"redhat-marketplace-7k42q\" (UID: \"5fa3fb74-286b-4859-9b43-6ec5b3c307c6\") " pod="openshift-marketplace/redhat-marketplace-7k42q" Dec 15 06:02:17 crc kubenswrapper[4747]: I1215 06:02:17.561999 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fa3fb74-286b-4859-9b43-6ec5b3c307c6-utilities\") pod \"redhat-marketplace-7k42q\" (UID: \"5fa3fb74-286b-4859-9b43-6ec5b3c307c6\") " pod="openshift-marketplace/redhat-marketplace-7k42q" Dec 15 06:02:17 crc 
kubenswrapper[4747]: I1215 06:02:17.562078 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fa3fb74-286b-4859-9b43-6ec5b3c307c6-catalog-content\") pod \"redhat-marketplace-7k42q\" (UID: \"5fa3fb74-286b-4859-9b43-6ec5b3c307c6\") " pod="openshift-marketplace/redhat-marketplace-7k42q" Dec 15 06:02:17 crc kubenswrapper[4747]: I1215 06:02:17.562252 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5glds\" (UniqueName: \"kubernetes.io/projected/5fa3fb74-286b-4859-9b43-6ec5b3c307c6-kube-api-access-5glds\") pod \"redhat-marketplace-7k42q\" (UID: \"5fa3fb74-286b-4859-9b43-6ec5b3c307c6\") " pod="openshift-marketplace/redhat-marketplace-7k42q" Dec 15 06:02:17 crc kubenswrapper[4747]: I1215 06:02:17.563118 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fa3fb74-286b-4859-9b43-6ec5b3c307c6-catalog-content\") pod \"redhat-marketplace-7k42q\" (UID: \"5fa3fb74-286b-4859-9b43-6ec5b3c307c6\") " pod="openshift-marketplace/redhat-marketplace-7k42q" Dec 15 06:02:17 crc kubenswrapper[4747]: I1215 06:02:17.563134 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fa3fb74-286b-4859-9b43-6ec5b3c307c6-utilities\") pod \"redhat-marketplace-7k42q\" (UID: \"5fa3fb74-286b-4859-9b43-6ec5b3c307c6\") " pod="openshift-marketplace/redhat-marketplace-7k42q" Dec 15 06:02:17 crc kubenswrapper[4747]: I1215 06:02:17.586039 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5glds\" (UniqueName: \"kubernetes.io/projected/5fa3fb74-286b-4859-9b43-6ec5b3c307c6-kube-api-access-5glds\") pod \"redhat-marketplace-7k42q\" (UID: \"5fa3fb74-286b-4859-9b43-6ec5b3c307c6\") " pod="openshift-marketplace/redhat-marketplace-7k42q" Dec 15 06:02:17 crc kubenswrapper[4747]: I1215 
06:02:17.641140 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7k42q" Dec 15 06:02:18 crc kubenswrapper[4747]: I1215 06:02:18.064746 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7k42q"] Dec 15 06:02:18 crc kubenswrapper[4747]: W1215 06:02:18.071091 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fa3fb74_286b_4859_9b43_6ec5b3c307c6.slice/crio-25c778f5e253db33aa3b536a1982beaddbba7151dfcb8a70baf382393bcc2097 WatchSource:0}: Error finding container 25c778f5e253db33aa3b536a1982beaddbba7151dfcb8a70baf382393bcc2097: Status 404 returned error can't find the container with id 25c778f5e253db33aa3b536a1982beaddbba7151dfcb8a70baf382393bcc2097 Dec 15 06:02:18 crc kubenswrapper[4747]: I1215 06:02:18.746148 4747 generic.go:334] "Generic (PLEG): container finished" podID="5fa3fb74-286b-4859-9b43-6ec5b3c307c6" containerID="3b811b41780a34997d9ff573c20edd4cdae9aebdd3237071dd8a58e724071e69" exitCode=0 Dec 15 06:02:18 crc kubenswrapper[4747]: I1215 06:02:18.746207 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7k42q" event={"ID":"5fa3fb74-286b-4859-9b43-6ec5b3c307c6","Type":"ContainerDied","Data":"3b811b41780a34997d9ff573c20edd4cdae9aebdd3237071dd8a58e724071e69"} Dec 15 06:02:18 crc kubenswrapper[4747]: I1215 06:02:18.746239 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7k42q" event={"ID":"5fa3fb74-286b-4859-9b43-6ec5b3c307c6","Type":"ContainerStarted","Data":"25c778f5e253db33aa3b536a1982beaddbba7151dfcb8a70baf382393bcc2097"} Dec 15 06:02:20 crc kubenswrapper[4747]: I1215 06:02:20.764871 4747 generic.go:334] "Generic (PLEG): container finished" podID="5fa3fb74-286b-4859-9b43-6ec5b3c307c6" containerID="7d7f828069c18c8abfb1031843f3c4e1aee88c3689b8d1aeebc1ba82fe4f5155" 
exitCode=0 Dec 15 06:02:20 crc kubenswrapper[4747]: I1215 06:02:20.764970 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7k42q" event={"ID":"5fa3fb74-286b-4859-9b43-6ec5b3c307c6","Type":"ContainerDied","Data":"7d7f828069c18c8abfb1031843f3c4e1aee88c3689b8d1aeebc1ba82fe4f5155"} Dec 15 06:02:21 crc kubenswrapper[4747]: I1215 06:02:21.047789 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-fd65c"] Dec 15 06:02:21 crc kubenswrapper[4747]: I1215 06:02:21.055481 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-fd65c"] Dec 15 06:02:21 crc kubenswrapper[4747]: I1215 06:02:21.779302 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7k42q" event={"ID":"5fa3fb74-286b-4859-9b43-6ec5b3c307c6","Type":"ContainerStarted","Data":"dec1324136238fa65e6de5905553371e16f34ebd7d8f3e91f316f034a3d87225"} Dec 15 06:02:21 crc kubenswrapper[4747]: I1215 06:02:21.803020 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7k42q" podStartSLOduration=2.097753198 podStartE2EDuration="4.803000563s" podCreationTimestamp="2025-12-15 06:02:17 +0000 UTC" firstStartedPulling="2025-12-15 06:02:18.748742347 +0000 UTC m=+1502.445254254" lastFinishedPulling="2025-12-15 06:02:21.453989701 +0000 UTC m=+1505.150501619" observedRunningTime="2025-12-15 06:02:21.796493354 +0000 UTC m=+1505.493005272" watchObservedRunningTime="2025-12-15 06:02:21.803000563 +0000 UTC m=+1505.499512490" Dec 15 06:02:22 crc kubenswrapper[4747]: I1215 06:02:22.643518 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c51097b-ac52-49ee-95df-440e1567be8b" path="/var/lib/kubelet/pods/2c51097b-ac52-49ee-95df-440e1567be8b/volumes" Dec 15 06:02:27 crc kubenswrapper[4747]: I1215 06:02:27.641520 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-7k42q" Dec 15 06:02:27 crc kubenswrapper[4747]: I1215 06:02:27.641881 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7k42q" Dec 15 06:02:27 crc kubenswrapper[4747]: I1215 06:02:27.680336 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7k42q" Dec 15 06:02:27 crc kubenswrapper[4747]: I1215 06:02:27.877580 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7k42q" Dec 15 06:02:27 crc kubenswrapper[4747]: I1215 06:02:27.921580 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7k42q"] Dec 15 06:02:29 crc kubenswrapper[4747]: I1215 06:02:29.862496 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7k42q" podUID="5fa3fb74-286b-4859-9b43-6ec5b3c307c6" containerName="registry-server" containerID="cri-o://dec1324136238fa65e6de5905553371e16f34ebd7d8f3e91f316f034a3d87225" gracePeriod=2 Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.236792 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7k42q" Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.316504 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5glds\" (UniqueName: \"kubernetes.io/projected/5fa3fb74-286b-4859-9b43-6ec5b3c307c6-kube-api-access-5glds\") pod \"5fa3fb74-286b-4859-9b43-6ec5b3c307c6\" (UID: \"5fa3fb74-286b-4859-9b43-6ec5b3c307c6\") " Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.316636 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fa3fb74-286b-4859-9b43-6ec5b3c307c6-catalog-content\") pod \"5fa3fb74-286b-4859-9b43-6ec5b3c307c6\" (UID: \"5fa3fb74-286b-4859-9b43-6ec5b3c307c6\") " Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.316740 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fa3fb74-286b-4859-9b43-6ec5b3c307c6-utilities\") pod \"5fa3fb74-286b-4859-9b43-6ec5b3c307c6\" (UID: \"5fa3fb74-286b-4859-9b43-6ec5b3c307c6\") " Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.317487 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fa3fb74-286b-4859-9b43-6ec5b3c307c6-utilities" (OuterVolumeSpecName: "utilities") pod "5fa3fb74-286b-4859-9b43-6ec5b3c307c6" (UID: "5fa3fb74-286b-4859-9b43-6ec5b3c307c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.323051 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fa3fb74-286b-4859-9b43-6ec5b3c307c6-kube-api-access-5glds" (OuterVolumeSpecName: "kube-api-access-5glds") pod "5fa3fb74-286b-4859-9b43-6ec5b3c307c6" (UID: "5fa3fb74-286b-4859-9b43-6ec5b3c307c6"). InnerVolumeSpecName "kube-api-access-5glds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.333245 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fa3fb74-286b-4859-9b43-6ec5b3c307c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5fa3fb74-286b-4859-9b43-6ec5b3c307c6" (UID: "5fa3fb74-286b-4859-9b43-6ec5b3c307c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.419120 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5glds\" (UniqueName: \"kubernetes.io/projected/5fa3fb74-286b-4859-9b43-6ec5b3c307c6-kube-api-access-5glds\") on node \"crc\" DevicePath \"\"" Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.419150 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fa3fb74-286b-4859-9b43-6ec5b3c307c6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.419161 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fa3fb74-286b-4859-9b43-6ec5b3c307c6-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.872280 4747 generic.go:334] "Generic (PLEG): container finished" podID="5fa3fb74-286b-4859-9b43-6ec5b3c307c6" containerID="dec1324136238fa65e6de5905553371e16f34ebd7d8f3e91f316f034a3d87225" exitCode=0 Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.872331 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7k42q" event={"ID":"5fa3fb74-286b-4859-9b43-6ec5b3c307c6","Type":"ContainerDied","Data":"dec1324136238fa65e6de5905553371e16f34ebd7d8f3e91f316f034a3d87225"} Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.872352 4747 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7k42q" Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.872390 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7k42q" event={"ID":"5fa3fb74-286b-4859-9b43-6ec5b3c307c6","Type":"ContainerDied","Data":"25c778f5e253db33aa3b536a1982beaddbba7151dfcb8a70baf382393bcc2097"} Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.872415 4747 scope.go:117] "RemoveContainer" containerID="dec1324136238fa65e6de5905553371e16f34ebd7d8f3e91f316f034a3d87225" Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.897269 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7k42q"] Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.897838 4747 scope.go:117] "RemoveContainer" containerID="7d7f828069c18c8abfb1031843f3c4e1aee88c3689b8d1aeebc1ba82fe4f5155" Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.907209 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7k42q"] Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.920269 4747 scope.go:117] "RemoveContainer" containerID="3b811b41780a34997d9ff573c20edd4cdae9aebdd3237071dd8a58e724071e69" Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.948496 4747 scope.go:117] "RemoveContainer" containerID="dec1324136238fa65e6de5905553371e16f34ebd7d8f3e91f316f034a3d87225" Dec 15 06:02:30 crc kubenswrapper[4747]: E1215 06:02:30.948911 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dec1324136238fa65e6de5905553371e16f34ebd7d8f3e91f316f034a3d87225\": container with ID starting with dec1324136238fa65e6de5905553371e16f34ebd7d8f3e91f316f034a3d87225 not found: ID does not exist" containerID="dec1324136238fa65e6de5905553371e16f34ebd7d8f3e91f316f034a3d87225" Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.948972 4747 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dec1324136238fa65e6de5905553371e16f34ebd7d8f3e91f316f034a3d87225"} err="failed to get container status \"dec1324136238fa65e6de5905553371e16f34ebd7d8f3e91f316f034a3d87225\": rpc error: code = NotFound desc = could not find container \"dec1324136238fa65e6de5905553371e16f34ebd7d8f3e91f316f034a3d87225\": container with ID starting with dec1324136238fa65e6de5905553371e16f34ebd7d8f3e91f316f034a3d87225 not found: ID does not exist" Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.949004 4747 scope.go:117] "RemoveContainer" containerID="7d7f828069c18c8abfb1031843f3c4e1aee88c3689b8d1aeebc1ba82fe4f5155" Dec 15 06:02:30 crc kubenswrapper[4747]: E1215 06:02:30.949327 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d7f828069c18c8abfb1031843f3c4e1aee88c3689b8d1aeebc1ba82fe4f5155\": container with ID starting with 7d7f828069c18c8abfb1031843f3c4e1aee88c3689b8d1aeebc1ba82fe4f5155 not found: ID does not exist" containerID="7d7f828069c18c8abfb1031843f3c4e1aee88c3689b8d1aeebc1ba82fe4f5155" Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.949364 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d7f828069c18c8abfb1031843f3c4e1aee88c3689b8d1aeebc1ba82fe4f5155"} err="failed to get container status \"7d7f828069c18c8abfb1031843f3c4e1aee88c3689b8d1aeebc1ba82fe4f5155\": rpc error: code = NotFound desc = could not find container \"7d7f828069c18c8abfb1031843f3c4e1aee88c3689b8d1aeebc1ba82fe4f5155\": container with ID starting with 7d7f828069c18c8abfb1031843f3c4e1aee88c3689b8d1aeebc1ba82fe4f5155 not found: ID does not exist" Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.949390 4747 scope.go:117] "RemoveContainer" containerID="3b811b41780a34997d9ff573c20edd4cdae9aebdd3237071dd8a58e724071e69" Dec 15 06:02:30 crc kubenswrapper[4747]: E1215 
06:02:30.949654 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b811b41780a34997d9ff573c20edd4cdae9aebdd3237071dd8a58e724071e69\": container with ID starting with 3b811b41780a34997d9ff573c20edd4cdae9aebdd3237071dd8a58e724071e69 not found: ID does not exist" containerID="3b811b41780a34997d9ff573c20edd4cdae9aebdd3237071dd8a58e724071e69" Dec 15 06:02:30 crc kubenswrapper[4747]: I1215 06:02:30.949683 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b811b41780a34997d9ff573c20edd4cdae9aebdd3237071dd8a58e724071e69"} err="failed to get container status \"3b811b41780a34997d9ff573c20edd4cdae9aebdd3237071dd8a58e724071e69\": rpc error: code = NotFound desc = could not find container \"3b811b41780a34997d9ff573c20edd4cdae9aebdd3237071dd8a58e724071e69\": container with ID starting with 3b811b41780a34997d9ff573c20edd4cdae9aebdd3237071dd8a58e724071e69 not found: ID does not exist" Dec 15 06:02:32 crc kubenswrapper[4747]: I1215 06:02:32.160568 4747 scope.go:117] "RemoveContainer" containerID="c267b0293d2990510552ae08778dea5664cb2e986cfd3442f41ab8f2bda0097b" Dec 15 06:02:32 crc kubenswrapper[4747]: I1215 06:02:32.202795 4747 scope.go:117] "RemoveContainer" containerID="15db45f295818ff3aa48b7e9ae715dbfad735b614677ee1980e23350be0e36ec" Dec 15 06:02:32 crc kubenswrapper[4747]: I1215 06:02:32.220674 4747 scope.go:117] "RemoveContainer" containerID="c429cd9c2f6921834a087af44b405b1d6f24193ef67089d89e66d13ac1c52963" Dec 15 06:02:32 crc kubenswrapper[4747]: I1215 06:02:32.251023 4747 scope.go:117] "RemoveContainer" containerID="7ed0df32aaf83aafd3c89500c065e6f8affd2c5b6a04990de17351de3f6d45df" Dec 15 06:02:32 crc kubenswrapper[4747]: I1215 06:02:32.286189 4747 scope.go:117] "RemoveContainer" containerID="250206dd7cd405d970c21eb1cabe2a6085f3f2aa44cdcaf50f8b6912936d924b" Dec 15 06:02:32 crc kubenswrapper[4747]: I1215 06:02:32.332301 4747 scope.go:117] 
"RemoveContainer" containerID="2e51d143beb645843dc4fe3c8412f75c63e0096896b9a3b3a1b211572db41418" Dec 15 06:02:32 crc kubenswrapper[4747]: I1215 06:02:32.357326 4747 scope.go:117] "RemoveContainer" containerID="3396993c6639717501fa5ce0308de40abbb2cbc3875ca355a393167f207f2b83" Dec 15 06:02:32 crc kubenswrapper[4747]: I1215 06:02:32.376597 4747 scope.go:117] "RemoveContainer" containerID="b359c8a60c41a492332b71df95c563453b1a044da6a7d0481963b5f8a5f2a115" Dec 15 06:02:32 crc kubenswrapper[4747]: I1215 06:02:32.397002 4747 scope.go:117] "RemoveContainer" containerID="b98f828ff6c111644fdcdcd2cf12c43c91b6c25362947028543d792b3d955dee" Dec 15 06:02:32 crc kubenswrapper[4747]: I1215 06:02:32.640573 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fa3fb74-286b-4859-9b43-6ec5b3c307c6" path="/var/lib/kubelet/pods/5fa3fb74-286b-4859-9b43-6ec5b3c307c6/volumes" Dec 15 06:02:35 crc kubenswrapper[4747]: I1215 06:02:35.027955 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-n8z94"] Dec 15 06:02:35 crc kubenswrapper[4747]: I1215 06:02:35.034041 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-n8z94"] Dec 15 06:02:36 crc kubenswrapper[4747]: I1215 06:02:36.639590 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f285358e-df22-44d4-b3b4-5a2dc69399c6" path="/var/lib/kubelet/pods/f285358e-df22-44d4-b3b4-5a2dc69399c6/volumes" Dec 15 06:02:40 crc kubenswrapper[4747]: I1215 06:02:40.028104 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-d5wlm"] Dec 15 06:02:40 crc kubenswrapper[4747]: I1215 06:02:40.033609 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-zsd2g"] Dec 15 06:02:40 crc kubenswrapper[4747]: I1215 06:02:40.039359 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-d5wlm"] Dec 15 06:02:40 crc kubenswrapper[4747]: I1215 06:02:40.045559 4747 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-zsd2g"] Dec 15 06:02:40 crc kubenswrapper[4747]: I1215 06:02:40.641341 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7acaca59-7888-4a05-8eb3-f925f2f8d44b" path="/var/lib/kubelet/pods/7acaca59-7888-4a05-8eb3-f925f2f8d44b/volumes" Dec 15 06:02:40 crc kubenswrapper[4747]: I1215 06:02:40.644206 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4170a18-3a02-40ea-ab35-838243909dc0" path="/var/lib/kubelet/pods/a4170a18-3a02-40ea-ab35-838243909dc0/volumes" Dec 15 06:02:56 crc kubenswrapper[4747]: I1215 06:02:56.033725 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-4hlv6"] Dec 15 06:02:56 crc kubenswrapper[4747]: I1215 06:02:56.043338 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-4hlv6"] Dec 15 06:02:56 crc kubenswrapper[4747]: I1215 06:02:56.639527 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6" path="/var/lib/kubelet/pods/2ea94c9f-1a7e-4b92-a4c1-f92a16a40fc6/volumes" Dec 15 06:03:08 crc kubenswrapper[4747]: I1215 06:03:08.155488 4747 generic.go:334] "Generic (PLEG): container finished" podID="163accf5-f1cd-48a4-93e3-4c7e6172470e" containerID="52ea7228cccab5a8d977c9a3562b8f1fa1b6fe7d5c116b30c89bed8fa92247d5" exitCode=0 Dec 15 06:03:08 crc kubenswrapper[4747]: I1215 06:03:08.155580 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz" event={"ID":"163accf5-f1cd-48a4-93e3-4c7e6172470e","Type":"ContainerDied","Data":"52ea7228cccab5a8d977c9a3562b8f1fa1b6fe7d5c116b30c89bed8fa92247d5"} Dec 15 06:03:09 crc kubenswrapper[4747]: I1215 06:03:09.536828 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz" Dec 15 06:03:09 crc kubenswrapper[4747]: I1215 06:03:09.739854 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggmcb\" (UniqueName: \"kubernetes.io/projected/163accf5-f1cd-48a4-93e3-4c7e6172470e-kube-api-access-ggmcb\") pod \"163accf5-f1cd-48a4-93e3-4c7e6172470e\" (UID: \"163accf5-f1cd-48a4-93e3-4c7e6172470e\") " Dec 15 06:03:09 crc kubenswrapper[4747]: I1215 06:03:09.740357 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/163accf5-f1cd-48a4-93e3-4c7e6172470e-ssh-key\") pod \"163accf5-f1cd-48a4-93e3-4c7e6172470e\" (UID: \"163accf5-f1cd-48a4-93e3-4c7e6172470e\") " Dec 15 06:03:09 crc kubenswrapper[4747]: I1215 06:03:09.740462 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/163accf5-f1cd-48a4-93e3-4c7e6172470e-inventory\") pod \"163accf5-f1cd-48a4-93e3-4c7e6172470e\" (UID: \"163accf5-f1cd-48a4-93e3-4c7e6172470e\") " Dec 15 06:03:09 crc kubenswrapper[4747]: I1215 06:03:09.746438 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/163accf5-f1cd-48a4-93e3-4c7e6172470e-kube-api-access-ggmcb" (OuterVolumeSpecName: "kube-api-access-ggmcb") pod "163accf5-f1cd-48a4-93e3-4c7e6172470e" (UID: "163accf5-f1cd-48a4-93e3-4c7e6172470e"). InnerVolumeSpecName "kube-api-access-ggmcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:03:09 crc kubenswrapper[4747]: I1215 06:03:09.762646 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/163accf5-f1cd-48a4-93e3-4c7e6172470e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "163accf5-f1cd-48a4-93e3-4c7e6172470e" (UID: "163accf5-f1cd-48a4-93e3-4c7e6172470e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:03:09 crc kubenswrapper[4747]: I1215 06:03:09.763396 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/163accf5-f1cd-48a4-93e3-4c7e6172470e-inventory" (OuterVolumeSpecName: "inventory") pod "163accf5-f1cd-48a4-93e3-4c7e6172470e" (UID: "163accf5-f1cd-48a4-93e3-4c7e6172470e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:03:09 crc kubenswrapper[4747]: I1215 06:03:09.846661 4747 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/163accf5-f1cd-48a4-93e3-4c7e6172470e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 15 06:03:09 crc kubenswrapper[4747]: I1215 06:03:09.846705 4747 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/163accf5-f1cd-48a4-93e3-4c7e6172470e-inventory\") on node \"crc\" DevicePath \"\"" Dec 15 06:03:09 crc kubenswrapper[4747]: I1215 06:03:09.846719 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggmcb\" (UniqueName: \"kubernetes.io/projected/163accf5-f1cd-48a4-93e3-4c7e6172470e-kube-api-access-ggmcb\") on node \"crc\" DevicePath \"\"" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.175807 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz" event={"ID":"163accf5-f1cd-48a4-93e3-4c7e6172470e","Type":"ContainerDied","Data":"d8294c49bdf616c71c278b5758b7e980754a79f7c4dc77bd2a18f77d65eca7a5"} Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.175870 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8294c49bdf616c71c278b5758b7e980754a79f7c4dc77bd2a18f77d65eca7a5" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.175882 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.249763 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr"] Dec 15 06:03:10 crc kubenswrapper[4747]: E1215 06:03:10.250384 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa3fb74-286b-4859-9b43-6ec5b3c307c6" containerName="registry-server" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.250407 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa3fb74-286b-4859-9b43-6ec5b3c307c6" containerName="registry-server" Dec 15 06:03:10 crc kubenswrapper[4747]: E1215 06:03:10.250426 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163accf5-f1cd-48a4-93e3-4c7e6172470e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.250438 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="163accf5-f1cd-48a4-93e3-4c7e6172470e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 15 06:03:10 crc kubenswrapper[4747]: E1215 06:03:10.250485 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa3fb74-286b-4859-9b43-6ec5b3c307c6" containerName="extract-utilities" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.250492 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa3fb74-286b-4859-9b43-6ec5b3c307c6" containerName="extract-utilities" Dec 15 06:03:10 crc kubenswrapper[4747]: E1215 06:03:10.250517 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa3fb74-286b-4859-9b43-6ec5b3c307c6" containerName="extract-content" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.250526 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa3fb74-286b-4859-9b43-6ec5b3c307c6" containerName="extract-content" Dec 15 06:03:10 crc 
kubenswrapper[4747]: I1215 06:03:10.250793 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa3fb74-286b-4859-9b43-6ec5b3c307c6" containerName="registry-server" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.250835 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="163accf5-f1cd-48a4-93e3-4c7e6172470e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.251856 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.253981 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.255241 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bfv8q" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.255440 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.261043 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.268807 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr"] Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.359661 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89aec499-875b-4b3b-8486-b01d8713b1c6-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr\" (UID: \"89aec499-875b-4b3b-8486-b01d8713b1c6\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.359756 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89aec499-875b-4b3b-8486-b01d8713b1c6-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr\" (UID: \"89aec499-875b-4b3b-8486-b01d8713b1c6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.359814 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcfbm\" (UniqueName: \"kubernetes.io/projected/89aec499-875b-4b3b-8486-b01d8713b1c6-kube-api-access-mcfbm\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr\" (UID: \"89aec499-875b-4b3b-8486-b01d8713b1c6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.461588 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89aec499-875b-4b3b-8486-b01d8713b1c6-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr\" (UID: \"89aec499-875b-4b3b-8486-b01d8713b1c6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.461636 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89aec499-875b-4b3b-8486-b01d8713b1c6-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr\" (UID: \"89aec499-875b-4b3b-8486-b01d8713b1c6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.461662 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mcfbm\" (UniqueName: \"kubernetes.io/projected/89aec499-875b-4b3b-8486-b01d8713b1c6-kube-api-access-mcfbm\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr\" (UID: \"89aec499-875b-4b3b-8486-b01d8713b1c6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.465388 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89aec499-875b-4b3b-8486-b01d8713b1c6-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr\" (UID: \"89aec499-875b-4b3b-8486-b01d8713b1c6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.465419 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89aec499-875b-4b3b-8486-b01d8713b1c6-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr\" (UID: \"89aec499-875b-4b3b-8486-b01d8713b1c6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.476381 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcfbm\" (UniqueName: \"kubernetes.io/projected/89aec499-875b-4b3b-8486-b01d8713b1c6-kube-api-access-mcfbm\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr\" (UID: \"89aec499-875b-4b3b-8486-b01d8713b1c6\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr" Dec 15 06:03:10 crc kubenswrapper[4747]: I1215 06:03:10.574133 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr" Dec 15 06:03:11 crc kubenswrapper[4747]: I1215 06:03:11.063703 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr"] Dec 15 06:03:11 crc kubenswrapper[4747]: I1215 06:03:11.187559 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr" event={"ID":"89aec499-875b-4b3b-8486-b01d8713b1c6","Type":"ContainerStarted","Data":"2d776221a71daecb9f63bb3cbe31d8c200d3dfb14cafa3eb8e65116c913f3353"} Dec 15 06:03:12 crc kubenswrapper[4747]: I1215 06:03:12.197806 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr" event={"ID":"89aec499-875b-4b3b-8486-b01d8713b1c6","Type":"ContainerStarted","Data":"9ac6edbfef4598806bc2a952f439f6c1bf7f6f6c68f3e41c39821cde647b4e65"} Dec 15 06:03:12 crc kubenswrapper[4747]: I1215 06:03:12.220339 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr" podStartSLOduration=1.6019138750000002 podStartE2EDuration="2.220309698s" podCreationTimestamp="2025-12-15 06:03:10 +0000 UTC" firstStartedPulling="2025-12-15 06:03:11.068521818 +0000 UTC m=+1554.765033735" lastFinishedPulling="2025-12-15 06:03:11.686917641 +0000 UTC m=+1555.383429558" observedRunningTime="2025-12-15 06:03:12.21753369 +0000 UTC m=+1555.914045608" watchObservedRunningTime="2025-12-15 06:03:12.220309698 +0000 UTC m=+1555.916821615" Dec 15 06:03:16 crc kubenswrapper[4747]: I1215 06:03:16.238917 4747 generic.go:334] "Generic (PLEG): container finished" podID="89aec499-875b-4b3b-8486-b01d8713b1c6" containerID="9ac6edbfef4598806bc2a952f439f6c1bf7f6f6c68f3e41c39821cde647b4e65" exitCode=0 Dec 15 06:03:16 crc kubenswrapper[4747]: I1215 06:03:16.239014 4747 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr" event={"ID":"89aec499-875b-4b3b-8486-b01d8713b1c6","Type":"ContainerDied","Data":"9ac6edbfef4598806bc2a952f439f6c1bf7f6f6c68f3e41c39821cde647b4e65"} Dec 15 06:03:17 crc kubenswrapper[4747]: I1215 06:03:17.630981 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr" Dec 15 06:03:17 crc kubenswrapper[4747]: I1215 06:03:17.808563 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89aec499-875b-4b3b-8486-b01d8713b1c6-ssh-key\") pod \"89aec499-875b-4b3b-8486-b01d8713b1c6\" (UID: \"89aec499-875b-4b3b-8486-b01d8713b1c6\") " Dec 15 06:03:17 crc kubenswrapper[4747]: I1215 06:03:17.808828 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcfbm\" (UniqueName: \"kubernetes.io/projected/89aec499-875b-4b3b-8486-b01d8713b1c6-kube-api-access-mcfbm\") pod \"89aec499-875b-4b3b-8486-b01d8713b1c6\" (UID: \"89aec499-875b-4b3b-8486-b01d8713b1c6\") " Dec 15 06:03:17 crc kubenswrapper[4747]: I1215 06:03:17.808985 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89aec499-875b-4b3b-8486-b01d8713b1c6-inventory\") pod \"89aec499-875b-4b3b-8486-b01d8713b1c6\" (UID: \"89aec499-875b-4b3b-8486-b01d8713b1c6\") " Dec 15 06:03:17 crc kubenswrapper[4747]: I1215 06:03:17.816296 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89aec499-875b-4b3b-8486-b01d8713b1c6-kube-api-access-mcfbm" (OuterVolumeSpecName: "kube-api-access-mcfbm") pod "89aec499-875b-4b3b-8486-b01d8713b1c6" (UID: "89aec499-875b-4b3b-8486-b01d8713b1c6"). InnerVolumeSpecName "kube-api-access-mcfbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:03:17 crc kubenswrapper[4747]: I1215 06:03:17.831509 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89aec499-875b-4b3b-8486-b01d8713b1c6-inventory" (OuterVolumeSpecName: "inventory") pod "89aec499-875b-4b3b-8486-b01d8713b1c6" (UID: "89aec499-875b-4b3b-8486-b01d8713b1c6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:03:17 crc kubenswrapper[4747]: I1215 06:03:17.833188 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89aec499-875b-4b3b-8486-b01d8713b1c6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "89aec499-875b-4b3b-8486-b01d8713b1c6" (UID: "89aec499-875b-4b3b-8486-b01d8713b1c6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:03:17 crc kubenswrapper[4747]: I1215 06:03:17.911949 4747 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89aec499-875b-4b3b-8486-b01d8713b1c6-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 15 06:03:17 crc kubenswrapper[4747]: I1215 06:03:17.911983 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcfbm\" (UniqueName: \"kubernetes.io/projected/89aec499-875b-4b3b-8486-b01d8713b1c6-kube-api-access-mcfbm\") on node \"crc\" DevicePath \"\"" Dec 15 06:03:17 crc kubenswrapper[4747]: I1215 06:03:17.911997 4747 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89aec499-875b-4b3b-8486-b01d8713b1c6-inventory\") on node \"crc\" DevicePath \"\"" Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.260179 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr" 
event={"ID":"89aec499-875b-4b3b-8486-b01d8713b1c6","Type":"ContainerDied","Data":"2d776221a71daecb9f63bb3cbe31d8c200d3dfb14cafa3eb8e65116c913f3353"} Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.260461 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d776221a71daecb9f63bb3cbe31d8c200d3dfb14cafa3eb8e65116c913f3353" Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.260233 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr" Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.324381 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4bl"] Dec 15 06:03:18 crc kubenswrapper[4747]: E1215 06:03:18.324893 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89aec499-875b-4b3b-8486-b01d8713b1c6" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.324909 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="89aec499-875b-4b3b-8486-b01d8713b1c6" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.325177 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="89aec499-875b-4b3b-8486-b01d8713b1c6" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.325944 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4bl" Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.327556 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.329198 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.329376 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.330913 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bfv8q" Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.331669 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4bl"] Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.421310 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv598\" (UniqueName: \"kubernetes.io/projected/589b27c2-c1d7-423e-b324-10ebc183f51d-kube-api-access-gv598\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kp4bl\" (UID: \"589b27c2-c1d7-423e-b324-10ebc183f51d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4bl" Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.421460 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/589b27c2-c1d7-423e-b324-10ebc183f51d-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kp4bl\" (UID: \"589b27c2-c1d7-423e-b324-10ebc183f51d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4bl" Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.421603 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/589b27c2-c1d7-423e-b324-10ebc183f51d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kp4bl\" (UID: \"589b27c2-c1d7-423e-b324-10ebc183f51d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4bl" Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.523971 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv598\" (UniqueName: \"kubernetes.io/projected/589b27c2-c1d7-423e-b324-10ebc183f51d-kube-api-access-gv598\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kp4bl\" (UID: \"589b27c2-c1d7-423e-b324-10ebc183f51d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4bl" Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.524154 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/589b27c2-c1d7-423e-b324-10ebc183f51d-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kp4bl\" (UID: \"589b27c2-c1d7-423e-b324-10ebc183f51d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4bl" Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.524243 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/589b27c2-c1d7-423e-b324-10ebc183f51d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kp4bl\" (UID: \"589b27c2-c1d7-423e-b324-10ebc183f51d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4bl" Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.530099 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/589b27c2-c1d7-423e-b324-10ebc183f51d-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kp4bl\" (UID: 
\"589b27c2-c1d7-423e-b324-10ebc183f51d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4bl" Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.532132 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/589b27c2-c1d7-423e-b324-10ebc183f51d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kp4bl\" (UID: \"589b27c2-c1d7-423e-b324-10ebc183f51d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4bl" Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.539110 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv598\" (UniqueName: \"kubernetes.io/projected/589b27c2-c1d7-423e-b324-10ebc183f51d-kube-api-access-gv598\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kp4bl\" (UID: \"589b27c2-c1d7-423e-b324-10ebc183f51d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4bl" Dec 15 06:03:18 crc kubenswrapper[4747]: I1215 06:03:18.643737 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4bl" Dec 15 06:03:19 crc kubenswrapper[4747]: I1215 06:03:19.099920 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4bl"] Dec 15 06:03:19 crc kubenswrapper[4747]: I1215 06:03:19.269541 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4bl" event={"ID":"589b27c2-c1d7-423e-b324-10ebc183f51d","Type":"ContainerStarted","Data":"4bedaf1d803897b159c87d012e6613a3e796dcde3d4e3ca37f77c768b62d635e"} Dec 15 06:03:20 crc kubenswrapper[4747]: I1215 06:03:20.124879 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dljqm"] Dec 15 06:03:20 crc kubenswrapper[4747]: I1215 06:03:20.127722 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dljqm" Dec 15 06:03:20 crc kubenswrapper[4747]: I1215 06:03:20.134348 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dljqm"] Dec 15 06:03:20 crc kubenswrapper[4747]: I1215 06:03:20.153026 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ed4e5e-9fba-404a-8e7e-e231ee5d7134-utilities\") pod \"community-operators-dljqm\" (UID: \"61ed4e5e-9fba-404a-8e7e-e231ee5d7134\") " pod="openshift-marketplace/community-operators-dljqm" Dec 15 06:03:20 crc kubenswrapper[4747]: I1215 06:03:20.153258 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tnvc\" (UniqueName: \"kubernetes.io/projected/61ed4e5e-9fba-404a-8e7e-e231ee5d7134-kube-api-access-4tnvc\") pod \"community-operators-dljqm\" (UID: \"61ed4e5e-9fba-404a-8e7e-e231ee5d7134\") " pod="openshift-marketplace/community-operators-dljqm" Dec 15 06:03:20 crc kubenswrapper[4747]: I1215 06:03:20.153381 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ed4e5e-9fba-404a-8e7e-e231ee5d7134-catalog-content\") pod \"community-operators-dljqm\" (UID: \"61ed4e5e-9fba-404a-8e7e-e231ee5d7134\") " pod="openshift-marketplace/community-operators-dljqm" Dec 15 06:03:20 crc kubenswrapper[4747]: I1215 06:03:20.255672 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tnvc\" (UniqueName: \"kubernetes.io/projected/61ed4e5e-9fba-404a-8e7e-e231ee5d7134-kube-api-access-4tnvc\") pod \"community-operators-dljqm\" (UID: \"61ed4e5e-9fba-404a-8e7e-e231ee5d7134\") " pod="openshift-marketplace/community-operators-dljqm" Dec 15 06:03:20 crc kubenswrapper[4747]: I1215 06:03:20.255729 4747 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ed4e5e-9fba-404a-8e7e-e231ee5d7134-catalog-content\") pod \"community-operators-dljqm\" (UID: \"61ed4e5e-9fba-404a-8e7e-e231ee5d7134\") " pod="openshift-marketplace/community-operators-dljqm" Dec 15 06:03:20 crc kubenswrapper[4747]: I1215 06:03:20.255794 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ed4e5e-9fba-404a-8e7e-e231ee5d7134-utilities\") pod \"community-operators-dljqm\" (UID: \"61ed4e5e-9fba-404a-8e7e-e231ee5d7134\") " pod="openshift-marketplace/community-operators-dljqm" Dec 15 06:03:20 crc kubenswrapper[4747]: I1215 06:03:20.256251 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ed4e5e-9fba-404a-8e7e-e231ee5d7134-utilities\") pod \"community-operators-dljqm\" (UID: \"61ed4e5e-9fba-404a-8e7e-e231ee5d7134\") " pod="openshift-marketplace/community-operators-dljqm" Dec 15 06:03:20 crc kubenswrapper[4747]: I1215 06:03:20.256344 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ed4e5e-9fba-404a-8e7e-e231ee5d7134-catalog-content\") pod \"community-operators-dljqm\" (UID: \"61ed4e5e-9fba-404a-8e7e-e231ee5d7134\") " pod="openshift-marketplace/community-operators-dljqm" Dec 15 06:03:20 crc kubenswrapper[4747]: I1215 06:03:20.270798 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tnvc\" (UniqueName: \"kubernetes.io/projected/61ed4e5e-9fba-404a-8e7e-e231ee5d7134-kube-api-access-4tnvc\") pod \"community-operators-dljqm\" (UID: \"61ed4e5e-9fba-404a-8e7e-e231ee5d7134\") " pod="openshift-marketplace/community-operators-dljqm" Dec 15 06:03:20 crc kubenswrapper[4747]: I1215 06:03:20.283079 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4bl" event={"ID":"589b27c2-c1d7-423e-b324-10ebc183f51d","Type":"ContainerStarted","Data":"dba0a094ac6f9b0b68400878428c62a17fa1bc6efeaecdbb2eb2933c8452300b"} Dec 15 06:03:20 crc kubenswrapper[4747]: I1215 06:03:20.321529 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4bl" podStartSLOduration=1.729824708 podStartE2EDuration="2.32148906s" podCreationTimestamp="2025-12-15 06:03:18 +0000 UTC" firstStartedPulling="2025-12-15 06:03:19.105977328 +0000 UTC m=+1562.802489245" lastFinishedPulling="2025-12-15 06:03:19.69764168 +0000 UTC m=+1563.394153597" observedRunningTime="2025-12-15 06:03:20.314600994 +0000 UTC m=+1564.011112911" watchObservedRunningTime="2025-12-15 06:03:20.32148906 +0000 UTC m=+1564.018000976" Dec 15 06:03:20 crc kubenswrapper[4747]: I1215 06:03:20.442513 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dljqm" Dec 15 06:03:20 crc kubenswrapper[4747]: I1215 06:03:20.918424 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dljqm"] Dec 15 06:03:20 crc kubenswrapper[4747]: W1215 06:03:20.922285 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61ed4e5e_9fba_404a_8e7e_e231ee5d7134.slice/crio-693d92fc68a2b6e413a97d3fa8f16b214f3df825afb0b7c94446539a565f38ea WatchSource:0}: Error finding container 693d92fc68a2b6e413a97d3fa8f16b214f3df825afb0b7c94446539a565f38ea: Status 404 returned error can't find the container with id 693d92fc68a2b6e413a97d3fa8f16b214f3df825afb0b7c94446539a565f38ea Dec 15 06:03:21 crc kubenswrapper[4747]: I1215 06:03:21.293880 4747 generic.go:334] "Generic (PLEG): container finished" podID="61ed4e5e-9fba-404a-8e7e-e231ee5d7134" 
containerID="958fedf577bf7fb86c17238ba72f71ef9136add459327b02db0fc23aa49956fd" exitCode=0 Dec 15 06:03:21 crc kubenswrapper[4747]: I1215 06:03:21.294238 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dljqm" event={"ID":"61ed4e5e-9fba-404a-8e7e-e231ee5d7134","Type":"ContainerDied","Data":"958fedf577bf7fb86c17238ba72f71ef9136add459327b02db0fc23aa49956fd"} Dec 15 06:03:21 crc kubenswrapper[4747]: I1215 06:03:21.294403 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dljqm" event={"ID":"61ed4e5e-9fba-404a-8e7e-e231ee5d7134","Type":"ContainerStarted","Data":"693d92fc68a2b6e413a97d3fa8f16b214f3df825afb0b7c94446539a565f38ea"} Dec 15 06:03:25 crc kubenswrapper[4747]: I1215 06:03:25.332147 4747 generic.go:334] "Generic (PLEG): container finished" podID="61ed4e5e-9fba-404a-8e7e-e231ee5d7134" containerID="ef676e5105cc321168630b75d38ce0316f477f2e0dbed71f94086d2a6012cea2" exitCode=0 Dec 15 06:03:25 crc kubenswrapper[4747]: I1215 06:03:25.332217 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dljqm" event={"ID":"61ed4e5e-9fba-404a-8e7e-e231ee5d7134","Type":"ContainerDied","Data":"ef676e5105cc321168630b75d38ce0316f477f2e0dbed71f94086d2a6012cea2"} Dec 15 06:03:26 crc kubenswrapper[4747]: I1215 06:03:26.347094 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dljqm" event={"ID":"61ed4e5e-9fba-404a-8e7e-e231ee5d7134","Type":"ContainerStarted","Data":"a9589e7e8a2f6c12b75d8c460ccb10810f2eebbdeec148a80c8baed244d6dc21"} Dec 15 06:03:26 crc kubenswrapper[4747]: I1215 06:03:26.369384 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dljqm" podStartSLOduration=1.676550636 podStartE2EDuration="6.369366824s" podCreationTimestamp="2025-12-15 06:03:20 +0000 UTC" firstStartedPulling="2025-12-15 06:03:21.295900575 
+0000 UTC m=+1564.992412492" lastFinishedPulling="2025-12-15 06:03:25.988716763 +0000 UTC m=+1569.685228680" observedRunningTime="2025-12-15 06:03:26.364109002 +0000 UTC m=+1570.060620920" watchObservedRunningTime="2025-12-15 06:03:26.369366824 +0000 UTC m=+1570.065878741" Dec 15 06:03:30 crc kubenswrapper[4747]: I1215 06:03:30.442661 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dljqm" Dec 15 06:03:30 crc kubenswrapper[4747]: I1215 06:03:30.443441 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dljqm" Dec 15 06:03:30 crc kubenswrapper[4747]: I1215 06:03:30.483684 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dljqm" Dec 15 06:03:31 crc kubenswrapper[4747]: I1215 06:03:31.430374 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dljqm" Dec 15 06:03:31 crc kubenswrapper[4747]: I1215 06:03:31.500891 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dljqm"] Dec 15 06:03:31 crc kubenswrapper[4747]: I1215 06:03:31.543944 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dnrjw"] Dec 15 06:03:31 crc kubenswrapper[4747]: I1215 06:03:31.544307 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dnrjw" podUID="16bd4ac3-acf8-400e-9413-fed487146d2f" containerName="registry-server" containerID="cri-o://086e7446a0dac6252dcc060a943156ee0f74f157139d4cd7d4baa32615360fee" gracePeriod=2 Dec 15 06:03:31 crc kubenswrapper[4747]: I1215 06:03:31.933793 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dnrjw" Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.096896 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16bd4ac3-acf8-400e-9413-fed487146d2f-utilities\") pod \"16bd4ac3-acf8-400e-9413-fed487146d2f\" (UID: \"16bd4ac3-acf8-400e-9413-fed487146d2f\") " Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.097059 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16bd4ac3-acf8-400e-9413-fed487146d2f-catalog-content\") pod \"16bd4ac3-acf8-400e-9413-fed487146d2f\" (UID: \"16bd4ac3-acf8-400e-9413-fed487146d2f\") " Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.097108 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxp7t\" (UniqueName: \"kubernetes.io/projected/16bd4ac3-acf8-400e-9413-fed487146d2f-kube-api-access-nxp7t\") pod \"16bd4ac3-acf8-400e-9413-fed487146d2f\" (UID: \"16bd4ac3-acf8-400e-9413-fed487146d2f\") " Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.097498 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bd4ac3-acf8-400e-9413-fed487146d2f-utilities" (OuterVolumeSpecName: "utilities") pod "16bd4ac3-acf8-400e-9413-fed487146d2f" (UID: "16bd4ac3-acf8-400e-9413-fed487146d2f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.097977 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16bd4ac3-acf8-400e-9413-fed487146d2f-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.111076 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bd4ac3-acf8-400e-9413-fed487146d2f-kube-api-access-nxp7t" (OuterVolumeSpecName: "kube-api-access-nxp7t") pod "16bd4ac3-acf8-400e-9413-fed487146d2f" (UID: "16bd4ac3-acf8-400e-9413-fed487146d2f"). InnerVolumeSpecName "kube-api-access-nxp7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.142160 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bd4ac3-acf8-400e-9413-fed487146d2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16bd4ac3-acf8-400e-9413-fed487146d2f" (UID: "16bd4ac3-acf8-400e-9413-fed487146d2f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.200613 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16bd4ac3-acf8-400e-9413-fed487146d2f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.200647 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxp7t\" (UniqueName: \"kubernetes.io/projected/16bd4ac3-acf8-400e-9413-fed487146d2f-kube-api-access-nxp7t\") on node \"crc\" DevicePath \"\"" Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.406593 4747 generic.go:334] "Generic (PLEG): container finished" podID="16bd4ac3-acf8-400e-9413-fed487146d2f" containerID="086e7446a0dac6252dcc060a943156ee0f74f157139d4cd7d4baa32615360fee" exitCode=0 Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.406644 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnrjw" event={"ID":"16bd4ac3-acf8-400e-9413-fed487146d2f","Type":"ContainerDied","Data":"086e7446a0dac6252dcc060a943156ee0f74f157139d4cd7d4baa32615360fee"} Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.406691 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dnrjw" Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.406719 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnrjw" event={"ID":"16bd4ac3-acf8-400e-9413-fed487146d2f","Type":"ContainerDied","Data":"594db38486737aee85a6bd5646f1754d7a8ca547d4e036a51ff6cef9ec5dda15"} Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.406751 4747 scope.go:117] "RemoveContainer" containerID="086e7446a0dac6252dcc060a943156ee0f74f157139d4cd7d4baa32615360fee" Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.438776 4747 scope.go:117] "RemoveContainer" containerID="d97c261d87e60ecd5883136156239f704d84d93d305d133275aeef4dc90a3072" Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.442296 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dnrjw"] Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.448705 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dnrjw"] Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.458840 4747 scope.go:117] "RemoveContainer" containerID="231ff0798098ba5a62608c597b0071765506f8ebcb1a05074af743e652952836" Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.493681 4747 scope.go:117] "RemoveContainer" containerID="086e7446a0dac6252dcc060a943156ee0f74f157139d4cd7d4baa32615360fee" Dec 15 06:03:32 crc kubenswrapper[4747]: E1215 06:03:32.494153 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"086e7446a0dac6252dcc060a943156ee0f74f157139d4cd7d4baa32615360fee\": container with ID starting with 086e7446a0dac6252dcc060a943156ee0f74f157139d4cd7d4baa32615360fee not found: ID does not exist" containerID="086e7446a0dac6252dcc060a943156ee0f74f157139d4cd7d4baa32615360fee" Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.494189 4747 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"086e7446a0dac6252dcc060a943156ee0f74f157139d4cd7d4baa32615360fee"} err="failed to get container status \"086e7446a0dac6252dcc060a943156ee0f74f157139d4cd7d4baa32615360fee\": rpc error: code = NotFound desc = could not find container \"086e7446a0dac6252dcc060a943156ee0f74f157139d4cd7d4baa32615360fee\": container with ID starting with 086e7446a0dac6252dcc060a943156ee0f74f157139d4cd7d4baa32615360fee not found: ID does not exist" Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.494220 4747 scope.go:117] "RemoveContainer" containerID="d97c261d87e60ecd5883136156239f704d84d93d305d133275aeef4dc90a3072" Dec 15 06:03:32 crc kubenswrapper[4747]: E1215 06:03:32.494518 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d97c261d87e60ecd5883136156239f704d84d93d305d133275aeef4dc90a3072\": container with ID starting with d97c261d87e60ecd5883136156239f704d84d93d305d133275aeef4dc90a3072 not found: ID does not exist" containerID="d97c261d87e60ecd5883136156239f704d84d93d305d133275aeef4dc90a3072" Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.494538 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d97c261d87e60ecd5883136156239f704d84d93d305d133275aeef4dc90a3072"} err="failed to get container status \"d97c261d87e60ecd5883136156239f704d84d93d305d133275aeef4dc90a3072\": rpc error: code = NotFound desc = could not find container \"d97c261d87e60ecd5883136156239f704d84d93d305d133275aeef4dc90a3072\": container with ID starting with d97c261d87e60ecd5883136156239f704d84d93d305d133275aeef4dc90a3072 not found: ID does not exist" Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.494554 4747 scope.go:117] "RemoveContainer" containerID="231ff0798098ba5a62608c597b0071765506f8ebcb1a05074af743e652952836" Dec 15 06:03:32 crc kubenswrapper[4747]: E1215 
06:03:32.494839 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"231ff0798098ba5a62608c597b0071765506f8ebcb1a05074af743e652952836\": container with ID starting with 231ff0798098ba5a62608c597b0071765506f8ebcb1a05074af743e652952836 not found: ID does not exist" containerID="231ff0798098ba5a62608c597b0071765506f8ebcb1a05074af743e652952836" Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.494876 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"231ff0798098ba5a62608c597b0071765506f8ebcb1a05074af743e652952836"} err="failed to get container status \"231ff0798098ba5a62608c597b0071765506f8ebcb1a05074af743e652952836\": rpc error: code = NotFound desc = could not find container \"231ff0798098ba5a62608c597b0071765506f8ebcb1a05074af743e652952836\": container with ID starting with 231ff0798098ba5a62608c597b0071765506f8ebcb1a05074af743e652952836 not found: ID does not exist" Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.553744 4747 scope.go:117] "RemoveContainer" containerID="a4e4c19ca50e70674267038dcfd17e451fb5cb7c03e62689bc0f274c534f4aa5" Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.596127 4747 scope.go:117] "RemoveContainer" containerID="30090006726f4f8f2e101e709cace0d7149cb9395ae58234657fe2d8507ecf04" Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.639622 4747 scope.go:117] "RemoveContainer" containerID="f248b70f5c2337578dfbc201e5e7e8dc36eef21d39d5ee2844f954cf6807cf8b" Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.666718 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bd4ac3-acf8-400e-9413-fed487146d2f" path="/var/lib/kubelet/pods/16bd4ac3-acf8-400e-9413-fed487146d2f/volumes" Dec 15 06:03:32 crc kubenswrapper[4747]: I1215 06:03:32.686744 4747 scope.go:117] "RemoveContainer" containerID="d75b321f864f0dbe87288c0553a53f18aefaae1564116ae1192de1a875b1715b" Dec 15 06:03:42 crc 
kubenswrapper[4747]: I1215 06:03:42.358273 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pqvjd"] Dec 15 06:03:42 crc kubenswrapper[4747]: E1215 06:03:42.359684 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16bd4ac3-acf8-400e-9413-fed487146d2f" containerName="registry-server" Dec 15 06:03:42 crc kubenswrapper[4747]: I1215 06:03:42.359704 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="16bd4ac3-acf8-400e-9413-fed487146d2f" containerName="registry-server" Dec 15 06:03:42 crc kubenswrapper[4747]: E1215 06:03:42.359753 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16bd4ac3-acf8-400e-9413-fed487146d2f" containerName="extract-content" Dec 15 06:03:42 crc kubenswrapper[4747]: I1215 06:03:42.359760 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="16bd4ac3-acf8-400e-9413-fed487146d2f" containerName="extract-content" Dec 15 06:03:42 crc kubenswrapper[4747]: E1215 06:03:42.359781 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16bd4ac3-acf8-400e-9413-fed487146d2f" containerName="extract-utilities" Dec 15 06:03:42 crc kubenswrapper[4747]: I1215 06:03:42.359788 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="16bd4ac3-acf8-400e-9413-fed487146d2f" containerName="extract-utilities" Dec 15 06:03:42 crc kubenswrapper[4747]: I1215 06:03:42.360098 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="16bd4ac3-acf8-400e-9413-fed487146d2f" containerName="registry-server" Dec 15 06:03:42 crc kubenswrapper[4747]: I1215 06:03:42.362986 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pqvjd" Dec 15 06:03:42 crc kubenswrapper[4747]: I1215 06:03:42.373111 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pqvjd"] Dec 15 06:03:42 crc kubenswrapper[4747]: I1215 06:03:42.413349 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d0c1855-b6ed-4acb-a2b2-3935006720c2-catalog-content\") pod \"certified-operators-pqvjd\" (UID: \"7d0c1855-b6ed-4acb-a2b2-3935006720c2\") " pod="openshift-marketplace/certified-operators-pqvjd" Dec 15 06:03:42 crc kubenswrapper[4747]: I1215 06:03:42.413492 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tbld\" (UniqueName: \"kubernetes.io/projected/7d0c1855-b6ed-4acb-a2b2-3935006720c2-kube-api-access-6tbld\") pod \"certified-operators-pqvjd\" (UID: \"7d0c1855-b6ed-4acb-a2b2-3935006720c2\") " pod="openshift-marketplace/certified-operators-pqvjd" Dec 15 06:03:42 crc kubenswrapper[4747]: I1215 06:03:42.413645 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d0c1855-b6ed-4acb-a2b2-3935006720c2-utilities\") pod \"certified-operators-pqvjd\" (UID: \"7d0c1855-b6ed-4acb-a2b2-3935006720c2\") " pod="openshift-marketplace/certified-operators-pqvjd" Dec 15 06:03:42 crc kubenswrapper[4747]: I1215 06:03:42.514769 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d0c1855-b6ed-4acb-a2b2-3935006720c2-catalog-content\") pod \"certified-operators-pqvjd\" (UID: \"7d0c1855-b6ed-4acb-a2b2-3935006720c2\") " pod="openshift-marketplace/certified-operators-pqvjd" Dec 15 06:03:42 crc kubenswrapper[4747]: I1215 06:03:42.514878 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6tbld\" (UniqueName: \"kubernetes.io/projected/7d0c1855-b6ed-4acb-a2b2-3935006720c2-kube-api-access-6tbld\") pod \"certified-operators-pqvjd\" (UID: \"7d0c1855-b6ed-4acb-a2b2-3935006720c2\") " pod="openshift-marketplace/certified-operators-pqvjd" Dec 15 06:03:42 crc kubenswrapper[4747]: I1215 06:03:42.515022 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d0c1855-b6ed-4acb-a2b2-3935006720c2-utilities\") pod \"certified-operators-pqvjd\" (UID: \"7d0c1855-b6ed-4acb-a2b2-3935006720c2\") " pod="openshift-marketplace/certified-operators-pqvjd" Dec 15 06:03:42 crc kubenswrapper[4747]: I1215 06:03:42.515502 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d0c1855-b6ed-4acb-a2b2-3935006720c2-catalog-content\") pod \"certified-operators-pqvjd\" (UID: \"7d0c1855-b6ed-4acb-a2b2-3935006720c2\") " pod="openshift-marketplace/certified-operators-pqvjd" Dec 15 06:03:42 crc kubenswrapper[4747]: I1215 06:03:42.515551 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d0c1855-b6ed-4acb-a2b2-3935006720c2-utilities\") pod \"certified-operators-pqvjd\" (UID: \"7d0c1855-b6ed-4acb-a2b2-3935006720c2\") " pod="openshift-marketplace/certified-operators-pqvjd" Dec 15 06:03:42 crc kubenswrapper[4747]: I1215 06:03:42.533221 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tbld\" (UniqueName: \"kubernetes.io/projected/7d0c1855-b6ed-4acb-a2b2-3935006720c2-kube-api-access-6tbld\") pod \"certified-operators-pqvjd\" (UID: \"7d0c1855-b6ed-4acb-a2b2-3935006720c2\") " pod="openshift-marketplace/certified-operators-pqvjd" Dec 15 06:03:42 crc kubenswrapper[4747]: I1215 06:03:42.681342 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pqvjd" Dec 15 06:03:43 crc kubenswrapper[4747]: I1215 06:03:43.141346 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pqvjd"] Dec 15 06:03:43 crc kubenswrapper[4747]: W1215 06:03:43.147186 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d0c1855_b6ed_4acb_a2b2_3935006720c2.slice/crio-0997e73db7a62af03e0cd5729fddc34a6c0d8ed6487a5d6e3da9cc5679bd7994 WatchSource:0}: Error finding container 0997e73db7a62af03e0cd5729fddc34a6c0d8ed6487a5d6e3da9cc5679bd7994: Status 404 returned error can't find the container with id 0997e73db7a62af03e0cd5729fddc34a6c0d8ed6487a5d6e3da9cc5679bd7994 Dec 15 06:03:43 crc kubenswrapper[4747]: I1215 06:03:43.519298 4747 generic.go:334] "Generic (PLEG): container finished" podID="7d0c1855-b6ed-4acb-a2b2-3935006720c2" containerID="dc81ef17244a094d41503549456ac6eb2a233b14b94d51fdc5ad65b9cc15f714" exitCode=0 Dec 15 06:03:43 crc kubenswrapper[4747]: I1215 06:03:43.519396 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pqvjd" event={"ID":"7d0c1855-b6ed-4acb-a2b2-3935006720c2","Type":"ContainerDied","Data":"dc81ef17244a094d41503549456ac6eb2a233b14b94d51fdc5ad65b9cc15f714"} Dec 15 06:03:43 crc kubenswrapper[4747]: I1215 06:03:43.519657 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pqvjd" event={"ID":"7d0c1855-b6ed-4acb-a2b2-3935006720c2","Type":"ContainerStarted","Data":"0997e73db7a62af03e0cd5729fddc34a6c0d8ed6487a5d6e3da9cc5679bd7994"} Dec 15 06:03:45 crc kubenswrapper[4747]: I1215 06:03:45.547460 4747 generic.go:334] "Generic (PLEG): container finished" podID="7d0c1855-b6ed-4acb-a2b2-3935006720c2" containerID="c8349cc4ddba1e81ef0d8d8319f273cf385406c5a1880f3619d63f50468ba3c1" exitCode=0 Dec 15 06:03:45 crc kubenswrapper[4747]: I1215 
06:03:45.550171 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pqvjd" event={"ID":"7d0c1855-b6ed-4acb-a2b2-3935006720c2","Type":"ContainerDied","Data":"c8349cc4ddba1e81ef0d8d8319f273cf385406c5a1880f3619d63f50468ba3c1"} Dec 15 06:03:46 crc kubenswrapper[4747]: I1215 06:03:46.561389 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pqvjd" event={"ID":"7d0c1855-b6ed-4acb-a2b2-3935006720c2","Type":"ContainerStarted","Data":"2f490c15e7f8853e1e7a2c71e4ed2102c9f042f1d3e6a3437c26d60ad6296859"} Dec 15 06:03:46 crc kubenswrapper[4747]: I1215 06:03:46.586436 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pqvjd" podStartSLOduration=2.001235338 podStartE2EDuration="4.586417483s" podCreationTimestamp="2025-12-15 06:03:42 +0000 UTC" firstStartedPulling="2025-12-15 06:03:43.521562484 +0000 UTC m=+1587.218074401" lastFinishedPulling="2025-12-15 06:03:46.106744628 +0000 UTC m=+1589.803256546" observedRunningTime="2025-12-15 06:03:46.581813981 +0000 UTC m=+1590.278325898" watchObservedRunningTime="2025-12-15 06:03:46.586417483 +0000 UTC m=+1590.282929391" Dec 15 06:03:47 crc kubenswrapper[4747]: I1215 06:03:47.572898 4747 generic.go:334] "Generic (PLEG): container finished" podID="589b27c2-c1d7-423e-b324-10ebc183f51d" containerID="dba0a094ac6f9b0b68400878428c62a17fa1bc6efeaecdbb2eb2933c8452300b" exitCode=0 Dec 15 06:03:47 crc kubenswrapper[4747]: I1215 06:03:47.573896 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4bl" event={"ID":"589b27c2-c1d7-423e-b324-10ebc183f51d","Type":"ContainerDied","Data":"dba0a094ac6f9b0b68400878428c62a17fa1bc6efeaecdbb2eb2933c8452300b"} Dec 15 06:03:48 crc kubenswrapper[4747]: I1215 06:03:48.949200 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4bl" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.037065 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-s8tch"] Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.041883 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cacf-account-create-update-vg5lk"] Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.049178 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-9pfvt"] Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.057509 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a081-account-create-update-dwntx"] Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.068779 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-s8tch"] Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.075493 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/589b27c2-c1d7-423e-b324-10ebc183f51d-inventory\") pod \"589b27c2-c1d7-423e-b324-10ebc183f51d\" (UID: \"589b27c2-c1d7-423e-b324-10ebc183f51d\") " Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.075579 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/589b27c2-c1d7-423e-b324-10ebc183f51d-ssh-key\") pod \"589b27c2-c1d7-423e-b324-10ebc183f51d\" (UID: \"589b27c2-c1d7-423e-b324-10ebc183f51d\") " Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.075679 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv598\" (UniqueName: \"kubernetes.io/projected/589b27c2-c1d7-423e-b324-10ebc183f51d-kube-api-access-gv598\") pod \"589b27c2-c1d7-423e-b324-10ebc183f51d\" (UID: \"589b27c2-c1d7-423e-b324-10ebc183f51d\") " Dec 15 06:03:49 
crc kubenswrapper[4747]: I1215 06:03:49.078527 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-5blmr"] Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.083622 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/589b27c2-c1d7-423e-b324-10ebc183f51d-kube-api-access-gv598" (OuterVolumeSpecName: "kube-api-access-gv598") pod "589b27c2-c1d7-423e-b324-10ebc183f51d" (UID: "589b27c2-c1d7-423e-b324-10ebc183f51d"). InnerVolumeSpecName "kube-api-access-gv598". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.085293 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a081-account-create-update-dwntx"] Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.091031 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-9pfvt"] Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.096244 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cacf-account-create-update-vg5lk"] Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.098355 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/589b27c2-c1d7-423e-b324-10ebc183f51d-inventory" (OuterVolumeSpecName: "inventory") pod "589b27c2-c1d7-423e-b324-10ebc183f51d" (UID: "589b27c2-c1d7-423e-b324-10ebc183f51d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.098641 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/589b27c2-c1d7-423e-b324-10ebc183f51d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "589b27c2-c1d7-423e-b324-10ebc183f51d" (UID: "589b27c2-c1d7-423e-b324-10ebc183f51d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.101253 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-5blmr"] Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.178509 4747 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/589b27c2-c1d7-423e-b324-10ebc183f51d-inventory\") on node \"crc\" DevicePath \"\"" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.178688 4747 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/589b27c2-c1d7-423e-b324-10ebc183f51d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.178747 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv598\" (UniqueName: \"kubernetes.io/projected/589b27c2-c1d7-423e-b324-10ebc183f51d-kube-api-access-gv598\") on node \"crc\" DevicePath \"\"" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.612818 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4bl" event={"ID":"589b27c2-c1d7-423e-b324-10ebc183f51d","Type":"ContainerDied","Data":"4bedaf1d803897b159c87d012e6613a3e796dcde3d4e3ca37f77c768b62d635e"} Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.612857 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bedaf1d803897b159c87d012e6613a3e796dcde3d4e3ca37f77c768b62d635e" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.612876 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kp4bl" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.671660 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-594cx"] Dec 15 06:03:49 crc kubenswrapper[4747]: E1215 06:03:49.672181 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="589b27c2-c1d7-423e-b324-10ebc183f51d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.672213 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="589b27c2-c1d7-423e-b324-10ebc183f51d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.672466 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="589b27c2-c1d7-423e-b324-10ebc183f51d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.674557 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-594cx" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.676583 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.676643 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.676733 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.677708 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bfv8q" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.681782 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-594cx"] Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.791321 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23f33913-7e72-4eee-bd81-3561906af7fb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-594cx\" (UID: \"23f33913-7e72-4eee-bd81-3561906af7fb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-594cx" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.791678 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23f33913-7e72-4eee-bd81-3561906af7fb-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-594cx\" (UID: \"23f33913-7e72-4eee-bd81-3561906af7fb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-594cx" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.791855 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69r8k\" (UniqueName: \"kubernetes.io/projected/23f33913-7e72-4eee-bd81-3561906af7fb-kube-api-access-69r8k\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-594cx\" (UID: \"23f33913-7e72-4eee-bd81-3561906af7fb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-594cx" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.894326 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23f33913-7e72-4eee-bd81-3561906af7fb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-594cx\" (UID: \"23f33913-7e72-4eee-bd81-3561906af7fb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-594cx" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.894414 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23f33913-7e72-4eee-bd81-3561906af7fb-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-594cx\" (UID: \"23f33913-7e72-4eee-bd81-3561906af7fb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-594cx" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.894625 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69r8k\" (UniqueName: \"kubernetes.io/projected/23f33913-7e72-4eee-bd81-3561906af7fb-kube-api-access-69r8k\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-594cx\" (UID: \"23f33913-7e72-4eee-bd81-3561906af7fb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-594cx" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.900878 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23f33913-7e72-4eee-bd81-3561906af7fb-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-594cx\" (UID: 
\"23f33913-7e72-4eee-bd81-3561906af7fb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-594cx" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.901966 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23f33913-7e72-4eee-bd81-3561906af7fb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-594cx\" (UID: \"23f33913-7e72-4eee-bd81-3561906af7fb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-594cx" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.910146 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69r8k\" (UniqueName: \"kubernetes.io/projected/23f33913-7e72-4eee-bd81-3561906af7fb-kube-api-access-69r8k\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-594cx\" (UID: \"23f33913-7e72-4eee-bd81-3561906af7fb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-594cx" Dec 15 06:03:49 crc kubenswrapper[4747]: I1215 06:03:49.997750 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-594cx" Dec 15 06:03:50 crc kubenswrapper[4747]: I1215 06:03:50.031820 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-8e39-account-create-update-db98v"] Dec 15 06:03:50 crc kubenswrapper[4747]: I1215 06:03:50.047011 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-8e39-account-create-update-db98v"] Dec 15 06:03:50 crc kubenswrapper[4747]: I1215 06:03:50.492431 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-594cx"] Dec 15 06:03:50 crc kubenswrapper[4747]: I1215 06:03:50.621531 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-594cx" event={"ID":"23f33913-7e72-4eee-bd81-3561906af7fb","Type":"ContainerStarted","Data":"d4ffbc21317c39821d97e78c9094ff356bdbc77e67a6f2ad1133bf38261ba28a"} Dec 15 06:03:50 crc kubenswrapper[4747]: I1215 06:03:50.641181 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3354a33b-f658-4c99-a32c-015e29ab16e4" path="/var/lib/kubelet/pods/3354a33b-f658-4c99-a32c-015e29ab16e4/volumes" Dec 15 06:03:50 crc kubenswrapper[4747]: I1215 06:03:50.641743 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8" path="/var/lib/kubelet/pods/3bec1c9c-b1e2-4dbc-8f35-e3b1d13f0dc8/volumes" Dec 15 06:03:50 crc kubenswrapper[4747]: I1215 06:03:50.642300 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66655803-661a-4934-8483-30529581438f" path="/var/lib/kubelet/pods/66655803-661a-4934-8483-30529581438f/volumes" Dec 15 06:03:50 crc kubenswrapper[4747]: I1215 06:03:50.642808 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8da8364f-08ef-4037-96ef-560876f54025" path="/var/lib/kubelet/pods/8da8364f-08ef-4037-96ef-560876f54025/volumes" Dec 15 
06:03:50 crc kubenswrapper[4747]: I1215 06:03:50.643905 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="917c48fe-e9b6-40da-8a57-a107fb5beb34" path="/var/lib/kubelet/pods/917c48fe-e9b6-40da-8a57-a107fb5beb34/volumes" Dec 15 06:03:50 crc kubenswrapper[4747]: I1215 06:03:50.644613 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc47afcf-b663-41be-86d7-a77108e5020c" path="/var/lib/kubelet/pods/bc47afcf-b663-41be-86d7-a77108e5020c/volumes" Dec 15 06:03:51 crc kubenswrapper[4747]: I1215 06:03:51.635458 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-594cx" event={"ID":"23f33913-7e72-4eee-bd81-3561906af7fb","Type":"ContainerStarted","Data":"0fa10d180119a2ae4e31ee370f6e76ae350cb1e53f1c2698b906311c9b72bfaf"} Dec 15 06:03:51 crc kubenswrapper[4747]: I1215 06:03:51.656318 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-594cx" podStartSLOduration=2.130394328 podStartE2EDuration="2.656304005s" podCreationTimestamp="2025-12-15 06:03:49 +0000 UTC" firstStartedPulling="2025-12-15 06:03:50.496357107 +0000 UTC m=+1594.192869013" lastFinishedPulling="2025-12-15 06:03:51.022266772 +0000 UTC m=+1594.718778690" observedRunningTime="2025-12-15 06:03:51.651591127 +0000 UTC m=+1595.348103044" watchObservedRunningTime="2025-12-15 06:03:51.656304005 +0000 UTC m=+1595.352815912" Dec 15 06:03:52 crc kubenswrapper[4747]: I1215 06:03:52.682377 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pqvjd" Dec 15 06:03:52 crc kubenswrapper[4747]: I1215 06:03:52.682458 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pqvjd" Dec 15 06:03:52 crc kubenswrapper[4747]: I1215 06:03:52.718865 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/certified-operators-pqvjd" Dec 15 06:03:53 crc kubenswrapper[4747]: I1215 06:03:53.697894 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pqvjd" Dec 15 06:03:53 crc kubenswrapper[4747]: I1215 06:03:53.747027 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pqvjd"] Dec 15 06:03:55 crc kubenswrapper[4747]: I1215 06:03:55.677362 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pqvjd" podUID="7d0c1855-b6ed-4acb-a2b2-3935006720c2" containerName="registry-server" containerID="cri-o://2f490c15e7f8853e1e7a2c71e4ed2102c9f042f1d3e6a3437c26d60ad6296859" gracePeriod=2 Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.053076 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pqvjd" Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.116743 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d0c1855-b6ed-4acb-a2b2-3935006720c2-utilities\") pod \"7d0c1855-b6ed-4acb-a2b2-3935006720c2\" (UID: \"7d0c1855-b6ed-4acb-a2b2-3935006720c2\") " Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.117259 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d0c1855-b6ed-4acb-a2b2-3935006720c2-catalog-content\") pod \"7d0c1855-b6ed-4acb-a2b2-3935006720c2\" (UID: \"7d0c1855-b6ed-4acb-a2b2-3935006720c2\") " Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.117295 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tbld\" (UniqueName: \"kubernetes.io/projected/7d0c1855-b6ed-4acb-a2b2-3935006720c2-kube-api-access-6tbld\") pod 
\"7d0c1855-b6ed-4acb-a2b2-3935006720c2\" (UID: \"7d0c1855-b6ed-4acb-a2b2-3935006720c2\") " Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.117657 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d0c1855-b6ed-4acb-a2b2-3935006720c2-utilities" (OuterVolumeSpecName: "utilities") pod "7d0c1855-b6ed-4acb-a2b2-3935006720c2" (UID: "7d0c1855-b6ed-4acb-a2b2-3935006720c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.118340 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d0c1855-b6ed-4acb-a2b2-3935006720c2-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.124267 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d0c1855-b6ed-4acb-a2b2-3935006720c2-kube-api-access-6tbld" (OuterVolumeSpecName: "kube-api-access-6tbld") pod "7d0c1855-b6ed-4acb-a2b2-3935006720c2" (UID: "7d0c1855-b6ed-4acb-a2b2-3935006720c2"). InnerVolumeSpecName "kube-api-access-6tbld". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.157751 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d0c1855-b6ed-4acb-a2b2-3935006720c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d0c1855-b6ed-4acb-a2b2-3935006720c2" (UID: "7d0c1855-b6ed-4acb-a2b2-3935006720c2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.220549 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d0c1855-b6ed-4acb-a2b2-3935006720c2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.220587 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tbld\" (UniqueName: \"kubernetes.io/projected/7d0c1855-b6ed-4acb-a2b2-3935006720c2-kube-api-access-6tbld\") on node \"crc\" DevicePath \"\"" Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.688940 4747 generic.go:334] "Generic (PLEG): container finished" podID="7d0c1855-b6ed-4acb-a2b2-3935006720c2" containerID="2f490c15e7f8853e1e7a2c71e4ed2102c9f042f1d3e6a3437c26d60ad6296859" exitCode=0 Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.688962 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pqvjd" event={"ID":"7d0c1855-b6ed-4acb-a2b2-3935006720c2","Type":"ContainerDied","Data":"2f490c15e7f8853e1e7a2c71e4ed2102c9f042f1d3e6a3437c26d60ad6296859"} Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.689054 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pqvjd" event={"ID":"7d0c1855-b6ed-4acb-a2b2-3935006720c2","Type":"ContainerDied","Data":"0997e73db7a62af03e0cd5729fddc34a6c0d8ed6487a5d6e3da9cc5679bd7994"} Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.689073 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pqvjd" Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.689081 4747 scope.go:117] "RemoveContainer" containerID="2f490c15e7f8853e1e7a2c71e4ed2102c9f042f1d3e6a3437c26d60ad6296859" Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.713612 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pqvjd"] Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.714036 4747 scope.go:117] "RemoveContainer" containerID="c8349cc4ddba1e81ef0d8d8319f273cf385406c5a1880f3619d63f50468ba3c1" Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.721762 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pqvjd"] Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.732687 4747 scope.go:117] "RemoveContainer" containerID="dc81ef17244a094d41503549456ac6eb2a233b14b94d51fdc5ad65b9cc15f714" Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.761756 4747 scope.go:117] "RemoveContainer" containerID="2f490c15e7f8853e1e7a2c71e4ed2102c9f042f1d3e6a3437c26d60ad6296859" Dec 15 06:03:56 crc kubenswrapper[4747]: E1215 06:03:56.762144 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f490c15e7f8853e1e7a2c71e4ed2102c9f042f1d3e6a3437c26d60ad6296859\": container with ID starting with 2f490c15e7f8853e1e7a2c71e4ed2102c9f042f1d3e6a3437c26d60ad6296859 not found: ID does not exist" containerID="2f490c15e7f8853e1e7a2c71e4ed2102c9f042f1d3e6a3437c26d60ad6296859" Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.762189 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f490c15e7f8853e1e7a2c71e4ed2102c9f042f1d3e6a3437c26d60ad6296859"} err="failed to get container status \"2f490c15e7f8853e1e7a2c71e4ed2102c9f042f1d3e6a3437c26d60ad6296859\": rpc error: code = NotFound desc = could not find 
container \"2f490c15e7f8853e1e7a2c71e4ed2102c9f042f1d3e6a3437c26d60ad6296859\": container with ID starting with 2f490c15e7f8853e1e7a2c71e4ed2102c9f042f1d3e6a3437c26d60ad6296859 not found: ID does not exist" Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.762225 4747 scope.go:117] "RemoveContainer" containerID="c8349cc4ddba1e81ef0d8d8319f273cf385406c5a1880f3619d63f50468ba3c1" Dec 15 06:03:56 crc kubenswrapper[4747]: E1215 06:03:56.762546 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8349cc4ddba1e81ef0d8d8319f273cf385406c5a1880f3619d63f50468ba3c1\": container with ID starting with c8349cc4ddba1e81ef0d8d8319f273cf385406c5a1880f3619d63f50468ba3c1 not found: ID does not exist" containerID="c8349cc4ddba1e81ef0d8d8319f273cf385406c5a1880f3619d63f50468ba3c1" Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.762583 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8349cc4ddba1e81ef0d8d8319f273cf385406c5a1880f3619d63f50468ba3c1"} err="failed to get container status \"c8349cc4ddba1e81ef0d8d8319f273cf385406c5a1880f3619d63f50468ba3c1\": rpc error: code = NotFound desc = could not find container \"c8349cc4ddba1e81ef0d8d8319f273cf385406c5a1880f3619d63f50468ba3c1\": container with ID starting with c8349cc4ddba1e81ef0d8d8319f273cf385406c5a1880f3619d63f50468ba3c1 not found: ID does not exist" Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.762606 4747 scope.go:117] "RemoveContainer" containerID="dc81ef17244a094d41503549456ac6eb2a233b14b94d51fdc5ad65b9cc15f714" Dec 15 06:03:56 crc kubenswrapper[4747]: E1215 06:03:56.762990 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc81ef17244a094d41503549456ac6eb2a233b14b94d51fdc5ad65b9cc15f714\": container with ID starting with dc81ef17244a094d41503549456ac6eb2a233b14b94d51fdc5ad65b9cc15f714 not found: ID does 
not exist" containerID="dc81ef17244a094d41503549456ac6eb2a233b14b94d51fdc5ad65b9cc15f714" Dec 15 06:03:56 crc kubenswrapper[4747]: I1215 06:03:56.763029 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc81ef17244a094d41503549456ac6eb2a233b14b94d51fdc5ad65b9cc15f714"} err="failed to get container status \"dc81ef17244a094d41503549456ac6eb2a233b14b94d51fdc5ad65b9cc15f714\": rpc error: code = NotFound desc = could not find container \"dc81ef17244a094d41503549456ac6eb2a233b14b94d51fdc5ad65b9cc15f714\": container with ID starting with dc81ef17244a094d41503549456ac6eb2a233b14b94d51fdc5ad65b9cc15f714 not found: ID does not exist" Dec 15 06:03:58 crc kubenswrapper[4747]: I1215 06:03:58.641255 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d0c1855-b6ed-4acb-a2b2-3935006720c2" path="/var/lib/kubelet/pods/7d0c1855-b6ed-4acb-a2b2-3935006720c2/volumes" Dec 15 06:04:12 crc kubenswrapper[4747]: I1215 06:04:12.029574 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pcdnx"] Dec 15 06:04:12 crc kubenswrapper[4747]: I1215 06:04:12.036352 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-pcdnx"] Dec 15 06:04:12 crc kubenswrapper[4747]: I1215 06:04:12.640133 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="258be85d-2a31-4830-b893-b8f20560dd71" path="/var/lib/kubelet/pods/258be85d-2a31-4830-b893-b8f20560dd71/volumes" Dec 15 06:04:26 crc kubenswrapper[4747]: I1215 06:04:26.984084 4747 generic.go:334] "Generic (PLEG): container finished" podID="23f33913-7e72-4eee-bd81-3561906af7fb" containerID="0fa10d180119a2ae4e31ee370f6e76ae350cb1e53f1c2698b906311c9b72bfaf" exitCode=0 Dec 15 06:04:26 crc kubenswrapper[4747]: I1215 06:04:26.984235 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-594cx" 
event={"ID":"23f33913-7e72-4eee-bd81-3561906af7fb","Type":"ContainerDied","Data":"0fa10d180119a2ae4e31ee370f6e76ae350cb1e53f1c2698b906311c9b72bfaf"} Dec 15 06:04:28 crc kubenswrapper[4747]: I1215 06:04:28.343832 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-594cx" Dec 15 06:04:28 crc kubenswrapper[4747]: I1215 06:04:28.428078 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23f33913-7e72-4eee-bd81-3561906af7fb-inventory\") pod \"23f33913-7e72-4eee-bd81-3561906af7fb\" (UID: \"23f33913-7e72-4eee-bd81-3561906af7fb\") " Dec 15 06:04:28 crc kubenswrapper[4747]: I1215 06:04:28.428168 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69r8k\" (UniqueName: \"kubernetes.io/projected/23f33913-7e72-4eee-bd81-3561906af7fb-kube-api-access-69r8k\") pod \"23f33913-7e72-4eee-bd81-3561906af7fb\" (UID: \"23f33913-7e72-4eee-bd81-3561906af7fb\") " Dec 15 06:04:28 crc kubenswrapper[4747]: I1215 06:04:28.428452 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23f33913-7e72-4eee-bd81-3561906af7fb-ssh-key\") pod \"23f33913-7e72-4eee-bd81-3561906af7fb\" (UID: \"23f33913-7e72-4eee-bd81-3561906af7fb\") " Dec 15 06:04:28 crc kubenswrapper[4747]: I1215 06:04:28.438102 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23f33913-7e72-4eee-bd81-3561906af7fb-kube-api-access-69r8k" (OuterVolumeSpecName: "kube-api-access-69r8k") pod "23f33913-7e72-4eee-bd81-3561906af7fb" (UID: "23f33913-7e72-4eee-bd81-3561906af7fb"). InnerVolumeSpecName "kube-api-access-69r8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:04:28 crc kubenswrapper[4747]: I1215 06:04:28.453416 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f33913-7e72-4eee-bd81-3561906af7fb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "23f33913-7e72-4eee-bd81-3561906af7fb" (UID: "23f33913-7e72-4eee-bd81-3561906af7fb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:04:28 crc kubenswrapper[4747]: I1215 06:04:28.455626 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f33913-7e72-4eee-bd81-3561906af7fb-inventory" (OuterVolumeSpecName: "inventory") pod "23f33913-7e72-4eee-bd81-3561906af7fb" (UID: "23f33913-7e72-4eee-bd81-3561906af7fb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:04:28 crc kubenswrapper[4747]: I1215 06:04:28.531891 4747 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23f33913-7e72-4eee-bd81-3561906af7fb-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 15 06:04:28 crc kubenswrapper[4747]: I1215 06:04:28.531942 4747 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23f33913-7e72-4eee-bd81-3561906af7fb-inventory\") on node \"crc\" DevicePath \"\"" Dec 15 06:04:28 crc kubenswrapper[4747]: I1215 06:04:28.531958 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69r8k\" (UniqueName: \"kubernetes.io/projected/23f33913-7e72-4eee-bd81-3561906af7fb-kube-api-access-69r8k\") on node \"crc\" DevicePath \"\"" Dec 15 06:04:28 crc kubenswrapper[4747]: I1215 06:04:28.865849 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Dec 15 06:04:28 crc kubenswrapper[4747]: I1215 06:04:28.866244 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.006358 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-594cx" event={"ID":"23f33913-7e72-4eee-bd81-3561906af7fb","Type":"ContainerDied","Data":"d4ffbc21317c39821d97e78c9094ff356bdbc77e67a6f2ad1133bf38261ba28a"} Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.006408 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4ffbc21317c39821d97e78c9094ff356bdbc77e67a6f2ad1133bf38261ba28a" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.006455 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-594cx" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.070728 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cvt9r"] Dec 15 06:04:29 crc kubenswrapper[4747]: E1215 06:04:29.071200 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f33913-7e72-4eee-bd81-3561906af7fb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.071220 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f33913-7e72-4eee-bd81-3561906af7fb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 15 06:04:29 crc kubenswrapper[4747]: E1215 06:04:29.071230 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d0c1855-b6ed-4acb-a2b2-3935006720c2" containerName="extract-content" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.071237 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d0c1855-b6ed-4acb-a2b2-3935006720c2" containerName="extract-content" Dec 15 06:04:29 crc kubenswrapper[4747]: E1215 06:04:29.071256 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d0c1855-b6ed-4acb-a2b2-3935006720c2" containerName="extract-utilities" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.071264 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d0c1855-b6ed-4acb-a2b2-3935006720c2" containerName="extract-utilities" Dec 15 06:04:29 crc kubenswrapper[4747]: E1215 06:04:29.071278 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d0c1855-b6ed-4acb-a2b2-3935006720c2" containerName="registry-server" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.071284 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d0c1855-b6ed-4acb-a2b2-3935006720c2" containerName="registry-server" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.071449 4747 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7d0c1855-b6ed-4acb-a2b2-3935006720c2" containerName="registry-server" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.071467 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="23f33913-7e72-4eee-bd81-3561906af7fb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.072131 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cvt9r" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.073679 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bfv8q" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.073741 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.074073 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.074833 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.079062 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cvt9r"] Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.145079 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/adb1376f-7db9-4946-8843-44313c04df54-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cvt9r\" (UID: \"adb1376f-7db9-4946-8843-44313c04df54\") " pod="openstack/ssh-known-hosts-edpm-deployment-cvt9r" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.145311 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/adb1376f-7db9-4946-8843-44313c04df54-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cvt9r\" (UID: \"adb1376f-7db9-4946-8843-44313c04df54\") " pod="openstack/ssh-known-hosts-edpm-deployment-cvt9r" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.145592 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pjh2\" (UniqueName: \"kubernetes.io/projected/adb1376f-7db9-4946-8843-44313c04df54-kube-api-access-5pjh2\") pod \"ssh-known-hosts-edpm-deployment-cvt9r\" (UID: \"adb1376f-7db9-4946-8843-44313c04df54\") " pod="openstack/ssh-known-hosts-edpm-deployment-cvt9r" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.248029 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pjh2\" (UniqueName: \"kubernetes.io/projected/adb1376f-7db9-4946-8843-44313c04df54-kube-api-access-5pjh2\") pod \"ssh-known-hosts-edpm-deployment-cvt9r\" (UID: \"adb1376f-7db9-4946-8843-44313c04df54\") " pod="openstack/ssh-known-hosts-edpm-deployment-cvt9r" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.248161 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/adb1376f-7db9-4946-8843-44313c04df54-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cvt9r\" (UID: \"adb1376f-7db9-4946-8843-44313c04df54\") " pod="openstack/ssh-known-hosts-edpm-deployment-cvt9r" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.248236 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/adb1376f-7db9-4946-8843-44313c04df54-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cvt9r\" (UID: \"adb1376f-7db9-4946-8843-44313c04df54\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-cvt9r" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.257292 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/adb1376f-7db9-4946-8843-44313c04df54-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cvt9r\" (UID: \"adb1376f-7db9-4946-8843-44313c04df54\") " pod="openstack/ssh-known-hosts-edpm-deployment-cvt9r" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.257324 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/adb1376f-7db9-4946-8843-44313c04df54-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cvt9r\" (UID: \"adb1376f-7db9-4946-8843-44313c04df54\") " pod="openstack/ssh-known-hosts-edpm-deployment-cvt9r" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.264016 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pjh2\" (UniqueName: \"kubernetes.io/projected/adb1376f-7db9-4946-8843-44313c04df54-kube-api-access-5pjh2\") pod \"ssh-known-hosts-edpm-deployment-cvt9r\" (UID: \"adb1376f-7db9-4946-8843-44313c04df54\") " pod="openstack/ssh-known-hosts-edpm-deployment-cvt9r" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.392775 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cvt9r" Dec 15 06:04:29 crc kubenswrapper[4747]: I1215 06:04:29.872662 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cvt9r"] Dec 15 06:04:30 crc kubenswrapper[4747]: I1215 06:04:30.017472 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cvt9r" event={"ID":"adb1376f-7db9-4946-8843-44313c04df54","Type":"ContainerStarted","Data":"de0058af72d9cf9554bd36ef383803d25b8c83894c54cae07062e4b2f8eda57a"} Dec 15 06:04:31 crc kubenswrapper[4747]: I1215 06:04:31.027661 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cvt9r" event={"ID":"adb1376f-7db9-4946-8843-44313c04df54","Type":"ContainerStarted","Data":"9b0f202b7e48717d1892d77a0be5e359b81791c12f073939fb9dbf7e21748ef3"} Dec 15 06:04:31 crc kubenswrapper[4747]: I1215 06:04:31.055859 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-cvt9r" podStartSLOduration=1.410251478 podStartE2EDuration="2.055841691s" podCreationTimestamp="2025-12-15 06:04:29 +0000 UTC" firstStartedPulling="2025-12-15 06:04:29.872837207 +0000 UTC m=+1633.569349123" lastFinishedPulling="2025-12-15 06:04:30.518427419 +0000 UTC m=+1634.214939336" observedRunningTime="2025-12-15 06:04:31.055527361 +0000 UTC m=+1634.752039277" watchObservedRunningTime="2025-12-15 06:04:31.055841691 +0000 UTC m=+1634.752353599" Dec 15 06:04:32 crc kubenswrapper[4747]: I1215 06:04:32.047064 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-9xp4w"] Dec 15 06:04:32 crc kubenswrapper[4747]: I1215 06:04:32.055944 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-p84db"] Dec 15 06:04:32 crc kubenswrapper[4747]: I1215 06:04:32.064474 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-conductor-db-sync-p84db"] Dec 15 06:04:32 crc kubenswrapper[4747]: I1215 06:04:32.070956 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-9xp4w"] Dec 15 06:04:32 crc kubenswrapper[4747]: I1215 06:04:32.643278 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9eb96a2-d315-4936-96e4-0be39cf72b0a" path="/var/lib/kubelet/pods/e9eb96a2-d315-4936-96e4-0be39cf72b0a/volumes" Dec 15 06:04:32 crc kubenswrapper[4747]: I1215 06:04:32.644268 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6" path="/var/lib/kubelet/pods/edecc2ab-d1b8-4dac-bc1c-825e65c0aaa6/volumes" Dec 15 06:04:32 crc kubenswrapper[4747]: I1215 06:04:32.791546 4747 scope.go:117] "RemoveContainer" containerID="e6abf5d627df5f52b5d6a5057119f4e93d31d86d2d2362b069708672cca46d44" Dec 15 06:04:32 crc kubenswrapper[4747]: I1215 06:04:32.844129 4747 scope.go:117] "RemoveContainer" containerID="e810a50b16e0881729df1f9e177fda73e8d0d5f4dbeefab0fdeccaad1023164d" Dec 15 06:04:32 crc kubenswrapper[4747]: I1215 06:04:32.867581 4747 scope.go:117] "RemoveContainer" containerID="580f497568c8c4f93f713faa404a57739a9c47b7e782003d2c969f608c02b90e" Dec 15 06:04:32 crc kubenswrapper[4747]: I1215 06:04:32.918586 4747 scope.go:117] "RemoveContainer" containerID="e6032e14ced0acb946136bfffaa2d72e396d3f688f6660a947f811def5d1a680" Dec 15 06:04:32 crc kubenswrapper[4747]: I1215 06:04:32.942219 4747 scope.go:117] "RemoveContainer" containerID="3d1a741cc555aac767160793911a42f77314b91f4f076166844e077632d56197" Dec 15 06:04:32 crc kubenswrapper[4747]: I1215 06:04:32.992381 4747 scope.go:117] "RemoveContainer" containerID="aaa027b22a2ce0770419f04315fb0731b984d09bbd3214a8278d1d510a94714c" Dec 15 06:04:33 crc kubenswrapper[4747]: I1215 06:04:33.039533 4747 scope.go:117] "RemoveContainer" containerID="96331b574620b4ea0a307673cddaea687954765986a0af72ca26326fe203e840" Dec 15 06:04:33 crc 
kubenswrapper[4747]: I1215 06:04:33.066254 4747 scope.go:117] "RemoveContainer" containerID="8a2204e885c93c3a32600c667ccdee090d373222a7bbbd8f9129a9d3689958d7" Dec 15 06:04:33 crc kubenswrapper[4747]: I1215 06:04:33.089558 4747 scope.go:117] "RemoveContainer" containerID="178d5b50270826bfe9b0439a5c6c151f3ebf51b85256c9a442ab371b84c67ab3" Dec 15 06:04:36 crc kubenswrapper[4747]: I1215 06:04:36.092990 4747 generic.go:334] "Generic (PLEG): container finished" podID="adb1376f-7db9-4946-8843-44313c04df54" containerID="9b0f202b7e48717d1892d77a0be5e359b81791c12f073939fb9dbf7e21748ef3" exitCode=0 Dec 15 06:04:36 crc kubenswrapper[4747]: I1215 06:04:36.093623 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cvt9r" event={"ID":"adb1376f-7db9-4946-8843-44313c04df54","Type":"ContainerDied","Data":"9b0f202b7e48717d1892d77a0be5e359b81791c12f073939fb9dbf7e21748ef3"} Dec 15 06:04:37 crc kubenswrapper[4747]: I1215 06:04:37.446186 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cvt9r" Dec 15 06:04:37 crc kubenswrapper[4747]: I1215 06:04:37.527744 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pjh2\" (UniqueName: \"kubernetes.io/projected/adb1376f-7db9-4946-8843-44313c04df54-kube-api-access-5pjh2\") pod \"adb1376f-7db9-4946-8843-44313c04df54\" (UID: \"adb1376f-7db9-4946-8843-44313c04df54\") " Dec 15 06:04:37 crc kubenswrapper[4747]: I1215 06:04:37.527906 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/adb1376f-7db9-4946-8843-44313c04df54-ssh-key-openstack-edpm-ipam\") pod \"adb1376f-7db9-4946-8843-44313c04df54\" (UID: \"adb1376f-7db9-4946-8843-44313c04df54\") " Dec 15 06:04:37 crc kubenswrapper[4747]: I1215 06:04:37.528102 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/adb1376f-7db9-4946-8843-44313c04df54-inventory-0\") pod \"adb1376f-7db9-4946-8843-44313c04df54\" (UID: \"adb1376f-7db9-4946-8843-44313c04df54\") " Dec 15 06:04:37 crc kubenswrapper[4747]: I1215 06:04:37.534991 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adb1376f-7db9-4946-8843-44313c04df54-kube-api-access-5pjh2" (OuterVolumeSpecName: "kube-api-access-5pjh2") pod "adb1376f-7db9-4946-8843-44313c04df54" (UID: "adb1376f-7db9-4946-8843-44313c04df54"). InnerVolumeSpecName "kube-api-access-5pjh2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:04:37 crc kubenswrapper[4747]: I1215 06:04:37.552852 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb1376f-7db9-4946-8843-44313c04df54-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "adb1376f-7db9-4946-8843-44313c04df54" (UID: "adb1376f-7db9-4946-8843-44313c04df54"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:04:37 crc kubenswrapper[4747]: I1215 06:04:37.553245 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb1376f-7db9-4946-8843-44313c04df54-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "adb1376f-7db9-4946-8843-44313c04df54" (UID: "adb1376f-7db9-4946-8843-44313c04df54"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:04:37 crc kubenswrapper[4747]: I1215 06:04:37.631322 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pjh2\" (UniqueName: \"kubernetes.io/projected/adb1376f-7db9-4946-8843-44313c04df54-kube-api-access-5pjh2\") on node \"crc\" DevicePath \"\"" Dec 15 06:04:37 crc kubenswrapper[4747]: I1215 06:04:37.631356 4747 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/adb1376f-7db9-4946-8843-44313c04df54-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 15 06:04:37 crc kubenswrapper[4747]: I1215 06:04:37.631371 4747 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/adb1376f-7db9-4946-8843-44313c04df54-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.117519 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cvt9r" 
event={"ID":"adb1376f-7db9-4946-8843-44313c04df54","Type":"ContainerDied","Data":"de0058af72d9cf9554bd36ef383803d25b8c83894c54cae07062e4b2f8eda57a"} Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.117575 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de0058af72d9cf9554bd36ef383803d25b8c83894c54cae07062e4b2f8eda57a" Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.117591 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cvt9r" Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.184081 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-jbxq5"] Dec 15 06:04:38 crc kubenswrapper[4747]: E1215 06:04:38.184496 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb1376f-7db9-4946-8843-44313c04df54" containerName="ssh-known-hosts-edpm-deployment" Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.184517 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb1376f-7db9-4946-8843-44313c04df54" containerName="ssh-known-hosts-edpm-deployment" Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.184742 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="adb1376f-7db9-4946-8843-44313c04df54" containerName="ssh-known-hosts-edpm-deployment" Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.185399 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jbxq5" Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.188802 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bfv8q" Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.189038 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.189068 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.189124 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.190799 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-jbxq5"] Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.346314 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3be348eb-7098-4347-b98e-dcf987dd854e-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jbxq5\" (UID: \"3be348eb-7098-4347-b98e-dcf987dd854e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jbxq5" Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.346679 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3be348eb-7098-4347-b98e-dcf987dd854e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jbxq5\" (UID: \"3be348eb-7098-4347-b98e-dcf987dd854e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jbxq5" Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.346744 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9hjl\" (UniqueName: \"kubernetes.io/projected/3be348eb-7098-4347-b98e-dcf987dd854e-kube-api-access-q9hjl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jbxq5\" (UID: \"3be348eb-7098-4347-b98e-dcf987dd854e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jbxq5" Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.449950 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3be348eb-7098-4347-b98e-dcf987dd854e-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jbxq5\" (UID: \"3be348eb-7098-4347-b98e-dcf987dd854e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jbxq5" Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.450354 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3be348eb-7098-4347-b98e-dcf987dd854e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jbxq5\" (UID: \"3be348eb-7098-4347-b98e-dcf987dd854e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jbxq5" Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.450425 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9hjl\" (UniqueName: \"kubernetes.io/projected/3be348eb-7098-4347-b98e-dcf987dd854e-kube-api-access-q9hjl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jbxq5\" (UID: \"3be348eb-7098-4347-b98e-dcf987dd854e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jbxq5" Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.456130 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3be348eb-7098-4347-b98e-dcf987dd854e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jbxq5\" (UID: \"3be348eb-7098-4347-b98e-dcf987dd854e\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jbxq5" Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.456774 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3be348eb-7098-4347-b98e-dcf987dd854e-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jbxq5\" (UID: \"3be348eb-7098-4347-b98e-dcf987dd854e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jbxq5" Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.466203 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9hjl\" (UniqueName: \"kubernetes.io/projected/3be348eb-7098-4347-b98e-dcf987dd854e-kube-api-access-q9hjl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jbxq5\" (UID: \"3be348eb-7098-4347-b98e-dcf987dd854e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jbxq5" Dec 15 06:04:38 crc kubenswrapper[4747]: I1215 06:04:38.504877 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jbxq5" Dec 15 06:04:39 crc kubenswrapper[4747]: I1215 06:04:39.008041 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-jbxq5"] Dec 15 06:04:39 crc kubenswrapper[4747]: I1215 06:04:39.126736 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jbxq5" event={"ID":"3be348eb-7098-4347-b98e-dcf987dd854e","Type":"ContainerStarted","Data":"68a692dbf82d4bc61f6372b4df2f64566351161301166342c22a51d568f8cc57"} Dec 15 06:04:40 crc kubenswrapper[4747]: I1215 06:04:40.136833 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jbxq5" event={"ID":"3be348eb-7098-4347-b98e-dcf987dd854e","Type":"ContainerStarted","Data":"2241fd4b8234a083e8cc4443e8fbc9ed93d8c2985f05967622a82dd7eb0e5c18"} Dec 15 06:04:40 crc kubenswrapper[4747]: I1215 06:04:40.161576 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jbxq5" podStartSLOduration=1.420420353 podStartE2EDuration="2.161549276s" podCreationTimestamp="2025-12-15 06:04:38 +0000 UTC" firstStartedPulling="2025-12-15 06:04:39.013153829 +0000 UTC m=+1642.709665745" lastFinishedPulling="2025-12-15 06:04:39.754282751 +0000 UTC m=+1643.450794668" observedRunningTime="2025-12-15 06:04:40.150538233 +0000 UTC m=+1643.847050151" watchObservedRunningTime="2025-12-15 06:04:40.161549276 +0000 UTC m=+1643.858061193" Dec 15 06:04:46 crc kubenswrapper[4747]: I1215 06:04:46.193266 4747 generic.go:334] "Generic (PLEG): container finished" podID="3be348eb-7098-4347-b98e-dcf987dd854e" containerID="2241fd4b8234a083e8cc4443e8fbc9ed93d8c2985f05967622a82dd7eb0e5c18" exitCode=0 Dec 15 06:04:46 crc kubenswrapper[4747]: I1215 06:04:46.193374 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jbxq5" event={"ID":"3be348eb-7098-4347-b98e-dcf987dd854e","Type":"ContainerDied","Data":"2241fd4b8234a083e8cc4443e8fbc9ed93d8c2985f05967622a82dd7eb0e5c18"} Dec 15 06:04:47 crc kubenswrapper[4747]: I1215 06:04:47.521775 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jbxq5" Dec 15 06:04:47 crc kubenswrapper[4747]: I1215 06:04:47.655585 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3be348eb-7098-4347-b98e-dcf987dd854e-inventory\") pod \"3be348eb-7098-4347-b98e-dcf987dd854e\" (UID: \"3be348eb-7098-4347-b98e-dcf987dd854e\") " Dec 15 06:04:47 crc kubenswrapper[4747]: I1215 06:04:47.655700 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9hjl\" (UniqueName: \"kubernetes.io/projected/3be348eb-7098-4347-b98e-dcf987dd854e-kube-api-access-q9hjl\") pod \"3be348eb-7098-4347-b98e-dcf987dd854e\" (UID: \"3be348eb-7098-4347-b98e-dcf987dd854e\") " Dec 15 06:04:47 crc kubenswrapper[4747]: I1215 06:04:47.655754 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3be348eb-7098-4347-b98e-dcf987dd854e-ssh-key\") pod \"3be348eb-7098-4347-b98e-dcf987dd854e\" (UID: \"3be348eb-7098-4347-b98e-dcf987dd854e\") " Dec 15 06:04:47 crc kubenswrapper[4747]: I1215 06:04:47.662973 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3be348eb-7098-4347-b98e-dcf987dd854e-kube-api-access-q9hjl" (OuterVolumeSpecName: "kube-api-access-q9hjl") pod "3be348eb-7098-4347-b98e-dcf987dd854e" (UID: "3be348eb-7098-4347-b98e-dcf987dd854e"). InnerVolumeSpecName "kube-api-access-q9hjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:04:47 crc kubenswrapper[4747]: I1215 06:04:47.684257 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3be348eb-7098-4347-b98e-dcf987dd854e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3be348eb-7098-4347-b98e-dcf987dd854e" (UID: "3be348eb-7098-4347-b98e-dcf987dd854e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:04:47 crc kubenswrapper[4747]: I1215 06:04:47.684737 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3be348eb-7098-4347-b98e-dcf987dd854e-inventory" (OuterVolumeSpecName: "inventory") pod "3be348eb-7098-4347-b98e-dcf987dd854e" (UID: "3be348eb-7098-4347-b98e-dcf987dd854e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:04:47 crc kubenswrapper[4747]: I1215 06:04:47.758632 4747 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3be348eb-7098-4347-b98e-dcf987dd854e-inventory\") on node \"crc\" DevicePath \"\"" Dec 15 06:04:47 crc kubenswrapper[4747]: I1215 06:04:47.758657 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9hjl\" (UniqueName: \"kubernetes.io/projected/3be348eb-7098-4347-b98e-dcf987dd854e-kube-api-access-q9hjl\") on node \"crc\" DevicePath \"\"" Dec 15 06:04:47 crc kubenswrapper[4747]: I1215 06:04:47.758671 4747 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3be348eb-7098-4347-b98e-dcf987dd854e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.213616 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jbxq5" 
event={"ID":"3be348eb-7098-4347-b98e-dcf987dd854e","Type":"ContainerDied","Data":"68a692dbf82d4bc61f6372b4df2f64566351161301166342c22a51d568f8cc57"} Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.213965 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68a692dbf82d4bc61f6372b4df2f64566351161301166342c22a51d568f8cc57" Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.213679 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jbxq5" Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.277331 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9"] Dec 15 06:04:48 crc kubenswrapper[4747]: E1215 06:04:48.277788 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3be348eb-7098-4347-b98e-dcf987dd854e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.277809 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="3be348eb-7098-4347-b98e-dcf987dd854e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.278028 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="3be348eb-7098-4347-b98e-dcf987dd854e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.278683 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9" Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.280458 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.280721 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bfv8q" Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.281030 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.281130 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.293171 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9"] Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.368645 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d87302aa-4741-47b7-8126-aaeeb74ace60-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9\" (UID: \"d87302aa-4741-47b7-8126-aaeeb74ace60\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9" Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.368707 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d87302aa-4741-47b7-8126-aaeeb74ace60-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9\" (UID: \"d87302aa-4741-47b7-8126-aaeeb74ace60\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9" Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.369167 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q49b\" (UniqueName: \"kubernetes.io/projected/d87302aa-4741-47b7-8126-aaeeb74ace60-kube-api-access-2q49b\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9\" (UID: \"d87302aa-4741-47b7-8126-aaeeb74ace60\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9" Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.472019 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q49b\" (UniqueName: \"kubernetes.io/projected/d87302aa-4741-47b7-8126-aaeeb74ace60-kube-api-access-2q49b\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9\" (UID: \"d87302aa-4741-47b7-8126-aaeeb74ace60\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9" Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.472235 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d87302aa-4741-47b7-8126-aaeeb74ace60-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9\" (UID: \"d87302aa-4741-47b7-8126-aaeeb74ace60\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9" Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.472300 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d87302aa-4741-47b7-8126-aaeeb74ace60-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9\" (UID: \"d87302aa-4741-47b7-8126-aaeeb74ace60\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9" Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.478821 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d87302aa-4741-47b7-8126-aaeeb74ace60-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9\" (UID: 
\"d87302aa-4741-47b7-8126-aaeeb74ace60\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9" Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.479475 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d87302aa-4741-47b7-8126-aaeeb74ace60-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9\" (UID: \"d87302aa-4741-47b7-8126-aaeeb74ace60\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9" Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.487988 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q49b\" (UniqueName: \"kubernetes.io/projected/d87302aa-4741-47b7-8126-aaeeb74ace60-kube-api-access-2q49b\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9\" (UID: \"d87302aa-4741-47b7-8126-aaeeb74ace60\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9" Dec 15 06:04:48 crc kubenswrapper[4747]: I1215 06:04:48.596904 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9" Dec 15 06:04:49 crc kubenswrapper[4747]: I1215 06:04:49.076428 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9"] Dec 15 06:04:49 crc kubenswrapper[4747]: I1215 06:04:49.228654 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9" event={"ID":"d87302aa-4741-47b7-8126-aaeeb74ace60","Type":"ContainerStarted","Data":"5356aee00d424bd64b1632e9688ad699bacca1441bc59785f07b985b280d13a3"} Dec 15 06:04:50 crc kubenswrapper[4747]: I1215 06:04:50.040091 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-vxw6d"] Dec 15 06:04:50 crc kubenswrapper[4747]: I1215 06:04:50.050722 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-vxw6d"] Dec 15 06:04:50 crc kubenswrapper[4747]: I1215 06:04:50.242083 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9" event={"ID":"d87302aa-4741-47b7-8126-aaeeb74ace60","Type":"ContainerStarted","Data":"26571491d5bf2f752dd5b9d9d05b7e3992a3ec678781de3d68caaa92115fd002"} Dec 15 06:04:50 crc kubenswrapper[4747]: I1215 06:04:50.268311 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9" podStartSLOduration=1.64984109 podStartE2EDuration="2.268290465s" podCreationTimestamp="2025-12-15 06:04:48 +0000 UTC" firstStartedPulling="2025-12-15 06:04:49.082972089 +0000 UTC m=+1652.779484007" lastFinishedPulling="2025-12-15 06:04:49.701421465 +0000 UTC m=+1653.397933382" observedRunningTime="2025-12-15 06:04:50.256802276 +0000 UTC m=+1653.953314194" watchObservedRunningTime="2025-12-15 06:04:50.268290465 +0000 UTC m=+1653.964802382" Dec 15 06:04:50 crc kubenswrapper[4747]: I1215 06:04:50.652435 
4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b819ab60-7712-47d0-853b-4ae39eb770b1" path="/var/lib/kubelet/pods/b819ab60-7712-47d0-853b-4ae39eb770b1/volumes" Dec 15 06:04:57 crc kubenswrapper[4747]: I1215 06:04:57.307543 4747 generic.go:334] "Generic (PLEG): container finished" podID="d87302aa-4741-47b7-8126-aaeeb74ace60" containerID="26571491d5bf2f752dd5b9d9d05b7e3992a3ec678781de3d68caaa92115fd002" exitCode=0 Dec 15 06:04:57 crc kubenswrapper[4747]: I1215 06:04:57.307637 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9" event={"ID":"d87302aa-4741-47b7-8126-aaeeb74ace60","Type":"ContainerDied","Data":"26571491d5bf2f752dd5b9d9d05b7e3992a3ec678781de3d68caaa92115fd002"} Dec 15 06:04:58 crc kubenswrapper[4747]: I1215 06:04:58.716502 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9" Dec 15 06:04:58 crc kubenswrapper[4747]: I1215 06:04:58.865150 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:04:58 crc kubenswrapper[4747]: I1215 06:04:58.865248 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:04:58 crc kubenswrapper[4747]: I1215 06:04:58.893797 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d87302aa-4741-47b7-8126-aaeeb74ace60-ssh-key\") pod 
\"d87302aa-4741-47b7-8126-aaeeb74ace60\" (UID: \"d87302aa-4741-47b7-8126-aaeeb74ace60\") " Dec 15 06:04:58 crc kubenswrapper[4747]: I1215 06:04:58.894138 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d87302aa-4741-47b7-8126-aaeeb74ace60-inventory\") pod \"d87302aa-4741-47b7-8126-aaeeb74ace60\" (UID: \"d87302aa-4741-47b7-8126-aaeeb74ace60\") " Dec 15 06:04:58 crc kubenswrapper[4747]: I1215 06:04:58.894245 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q49b\" (UniqueName: \"kubernetes.io/projected/d87302aa-4741-47b7-8126-aaeeb74ace60-kube-api-access-2q49b\") pod \"d87302aa-4741-47b7-8126-aaeeb74ace60\" (UID: \"d87302aa-4741-47b7-8126-aaeeb74ace60\") " Dec 15 06:04:58 crc kubenswrapper[4747]: I1215 06:04:58.900457 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d87302aa-4741-47b7-8126-aaeeb74ace60-kube-api-access-2q49b" (OuterVolumeSpecName: "kube-api-access-2q49b") pod "d87302aa-4741-47b7-8126-aaeeb74ace60" (UID: "d87302aa-4741-47b7-8126-aaeeb74ace60"). InnerVolumeSpecName "kube-api-access-2q49b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:04:58 crc kubenswrapper[4747]: I1215 06:04:58.919272 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d87302aa-4741-47b7-8126-aaeeb74ace60-inventory" (OuterVolumeSpecName: "inventory") pod "d87302aa-4741-47b7-8126-aaeeb74ace60" (UID: "d87302aa-4741-47b7-8126-aaeeb74ace60"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:04:58 crc kubenswrapper[4747]: I1215 06:04:58.920097 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d87302aa-4741-47b7-8126-aaeeb74ace60-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d87302aa-4741-47b7-8126-aaeeb74ace60" (UID: "d87302aa-4741-47b7-8126-aaeeb74ace60"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:04:58 crc kubenswrapper[4747]: I1215 06:04:58.996776 4747 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d87302aa-4741-47b7-8126-aaeeb74ace60-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 15 06:04:58 crc kubenswrapper[4747]: I1215 06:04:58.996903 4747 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d87302aa-4741-47b7-8126-aaeeb74ace60-inventory\") on node \"crc\" DevicePath \"\"" Dec 15 06:04:58 crc kubenswrapper[4747]: I1215 06:04:58.997004 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q49b\" (UniqueName: \"kubernetes.io/projected/d87302aa-4741-47b7-8126-aaeeb74ace60-kube-api-access-2q49b\") on node \"crc\" DevicePath \"\"" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.326896 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9" event={"ID":"d87302aa-4741-47b7-8126-aaeeb74ace60","Type":"ContainerDied","Data":"5356aee00d424bd64b1632e9688ad699bacca1441bc59785f07b985b280d13a3"} Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.326980 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.327003 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5356aee00d424bd64b1632e9688ad699bacca1441bc59785f07b985b280d13a3" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.401287 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb"] Dec 15 06:04:59 crc kubenswrapper[4747]: E1215 06:04:59.402442 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87302aa-4741-47b7-8126-aaeeb74ace60" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.402536 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87302aa-4741-47b7-8126-aaeeb74ace60" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.403048 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="d87302aa-4741-47b7-8126-aaeeb74ace60" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.404214 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.409771 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.410164 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.410621 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bfv8q" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.410841 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.411078 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.411229 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.411428 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.412618 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.419854 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb"] Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.514269 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.514381 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.514615 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.514662 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.514915 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.515076 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.515163 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.515391 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.515486 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tgtj\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-kube-api-access-5tgtj\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.515545 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.515587 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.515684 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.515753 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-telemetry-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.515825 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.617914 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.618042 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.618092 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tgtj\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-kube-api-access-5tgtj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.618121 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.618148 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.618183 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.618223 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc 
kubenswrapper[4747]: I1215 06:04:59.618255 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.618319 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.618361 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.618428 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 
06:04:59.618459 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.618522 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.618562 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.622712 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.623044 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.623519 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.624172 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.624194 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.624175 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: 
\"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.624771 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.625302 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.625900 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.626312 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.626733 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.627304 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.627784 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.633802 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tgtj\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-kube-api-access-5tgtj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tftvb\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:04:59 crc kubenswrapper[4747]: I1215 06:04:59.728325 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:05:00 crc kubenswrapper[4747]: I1215 06:05:00.203189 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb"] Dec 15 06:05:00 crc kubenswrapper[4747]: I1215 06:05:00.336738 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" event={"ID":"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826","Type":"ContainerStarted","Data":"fbcc54261d3998d576b00fc084400d3cfeec373adbb28d308b231ee2a6bc69fb"} Dec 15 06:05:01 crc kubenswrapper[4747]: I1215 06:05:01.356608 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" event={"ID":"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826","Type":"ContainerStarted","Data":"e2d942816c3f8fb5c1f7ec7c50c4b559a19843f7a4f1cf07be1919b7468c31cd"} Dec 15 06:05:28 crc kubenswrapper[4747]: I1215 06:05:28.686258 4747 generic.go:334] "Generic (PLEG): container finished" podID="4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826" containerID="e2d942816c3f8fb5c1f7ec7c50c4b559a19843f7a4f1cf07be1919b7468c31cd" exitCode=0 Dec 15 06:05:28 crc kubenswrapper[4747]: I1215 06:05:28.686357 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" event={"ID":"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826","Type":"ContainerDied","Data":"e2d942816c3f8fb5c1f7ec7c50c4b559a19843f7a4f1cf07be1919b7468c31cd"} Dec 15 06:05:28 crc kubenswrapper[4747]: I1215 06:05:28.865805 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:05:28 crc kubenswrapper[4747]: I1215 06:05:28.865957 4747 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:05:28 crc kubenswrapper[4747]: I1215 06:05:28.866061 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 06:05:28 crc kubenswrapper[4747]: I1215 06:05:28.867360 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610"} pod="openshift-machine-config-operator/machine-config-daemon-nldtn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 15 06:05:28 crc kubenswrapper[4747]: I1215 06:05:28.867463 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" containerID="cri-o://1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" gracePeriod=600 Dec 15 06:05:28 crc kubenswrapper[4747]: E1215 06:05:28.999591 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:05:29 crc kubenswrapper[4747]: I1215 06:05:29.700793 4747 generic.go:334] "Generic (PLEG): container finished" 
podID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" exitCode=0 Dec 15 06:05:29 crc kubenswrapper[4747]: I1215 06:05:29.700870 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerDied","Data":"1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610"} Dec 15 06:05:29 crc kubenswrapper[4747]: I1215 06:05:29.701250 4747 scope.go:117] "RemoveContainer" containerID="080353cc0a0655a6ae55c2b56c2536dc00fa2c3b98890884d8fe921591d73c15" Dec 15 06:05:29 crc kubenswrapper[4747]: I1215 06:05:29.702633 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:05:29 crc kubenswrapper[4747]: E1215 06:05:29.703196 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.145035 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.252191 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-libvirt-combined-ca-bundle\") pod \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.252281 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-neutron-metadata-combined-ca-bundle\") pod \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.252342 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-bootstrap-combined-ca-bundle\") pod \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.252374 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-ovn-combined-ca-bundle\") pod \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.252483 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tgtj\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-kube-api-access-5tgtj\") pod \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " Dec 15 06:05:30 crc 
kubenswrapper[4747]: I1215 06:05:30.252514 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-repo-setup-combined-ca-bundle\") pod \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.252536 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-ssh-key\") pod \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.252563 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-inventory\") pod \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.252622 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.252646 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.252707 4747 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-telemetry-combined-ca-bundle\") pod \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.252732 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.252761 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-nova-combined-ca-bundle\") pod \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.252786 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-ovn-default-certs-0\") pod \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\" (UID: \"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826\") " Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.258635 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826" (UID: "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.259180 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-kube-api-access-5tgtj" (OuterVolumeSpecName: "kube-api-access-5tgtj") pod "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826" (UID: "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826"). InnerVolumeSpecName "kube-api-access-5tgtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.259515 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826" (UID: "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.259942 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826" (UID: "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.260385 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826" (UID: "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.260408 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826" (UID: "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.260771 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826" (UID: "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.261888 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826" (UID: "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.262240 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826" (UID: "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.263067 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826" (UID: "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.263122 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826" (UID: "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.263357 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826" (UID: "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.281385 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826" (UID: "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.286447 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-inventory" (OuterVolumeSpecName: "inventory") pod "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826" (UID: "4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.356553 4747 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.356607 4747 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.356622 4747 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.356637 4747 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.356650 4747 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 
06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.356660 4747 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.356670 4747 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.356683 4747 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.356694 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tgtj\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-kube-api-access-5tgtj\") on node \"crc\" DevicePath \"\"" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.356703 4747 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.356713 4747 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.356730 4747 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-inventory\") on node \"crc\" DevicePath \"\"" Dec 15 06:05:30 crc kubenswrapper[4747]: 
I1215 06:05:30.356740 4747 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.356752 4747 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.713234 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" event={"ID":"4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826","Type":"ContainerDied","Data":"fbcc54261d3998d576b00fc084400d3cfeec373adbb28d308b231ee2a6bc69fb"} Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.713265 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tftvb" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.713298 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbcc54261d3998d576b00fc084400d3cfeec373adbb28d308b231ee2a6bc69fb" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.898918 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm"] Dec 15 06:05:30 crc kubenswrapper[4747]: E1215 06:05:30.899456 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.899477 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.899664 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.900423 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.903952 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.904237 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.904379 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.904522 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bfv8q" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.904640 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.906745 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm"] Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.968818 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0fffb8-5fa6-4351-83ac-e2687b00d983-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cppqm\" (UID: \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.968869 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ae0fffb8-5fa6-4351-83ac-e2687b00d983-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cppqm\" (UID: 
\"ae0fffb8-5fa6-4351-83ac-e2687b00d983\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.968955 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ae0fffb8-5fa6-4351-83ac-e2687b00d983-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cppqm\" (UID: \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.969106 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7dpt\" (UniqueName: \"kubernetes.io/projected/ae0fffb8-5fa6-4351-83ac-e2687b00d983-kube-api-access-w7dpt\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cppqm\" (UID: \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" Dec 15 06:05:30 crc kubenswrapper[4747]: I1215 06:05:30.969241 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae0fffb8-5fa6-4351-83ac-e2687b00d983-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cppqm\" (UID: \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" Dec 15 06:05:31 crc kubenswrapper[4747]: I1215 06:05:31.072355 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0fffb8-5fa6-4351-83ac-e2687b00d983-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cppqm\" (UID: \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" Dec 15 06:05:31 crc kubenswrapper[4747]: I1215 06:05:31.072833 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ae0fffb8-5fa6-4351-83ac-e2687b00d983-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cppqm\" (UID: \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" Dec 15 06:05:31 crc kubenswrapper[4747]: I1215 06:05:31.073056 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ae0fffb8-5fa6-4351-83ac-e2687b00d983-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cppqm\" (UID: \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" Dec 15 06:05:31 crc kubenswrapper[4747]: I1215 06:05:31.073292 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7dpt\" (UniqueName: \"kubernetes.io/projected/ae0fffb8-5fa6-4351-83ac-e2687b00d983-kube-api-access-w7dpt\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cppqm\" (UID: \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" Dec 15 06:05:31 crc kubenswrapper[4747]: I1215 06:05:31.073449 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae0fffb8-5fa6-4351-83ac-e2687b00d983-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cppqm\" (UID: \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" Dec 15 06:05:31 crc kubenswrapper[4747]: I1215 06:05:31.073816 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ae0fffb8-5fa6-4351-83ac-e2687b00d983-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cppqm\" (UID: \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" Dec 15 06:05:31 crc 
kubenswrapper[4747]: I1215 06:05:31.083822 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae0fffb8-5fa6-4351-83ac-e2687b00d983-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cppqm\" (UID: \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" Dec 15 06:05:31 crc kubenswrapper[4747]: I1215 06:05:31.084484 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ae0fffb8-5fa6-4351-83ac-e2687b00d983-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cppqm\" (UID: \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" Dec 15 06:05:31 crc kubenswrapper[4747]: I1215 06:05:31.085335 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0fffb8-5fa6-4351-83ac-e2687b00d983-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cppqm\" (UID: \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" Dec 15 06:05:31 crc kubenswrapper[4747]: I1215 06:05:31.096009 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7dpt\" (UniqueName: \"kubernetes.io/projected/ae0fffb8-5fa6-4351-83ac-e2687b00d983-kube-api-access-w7dpt\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cppqm\" (UID: \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" Dec 15 06:05:31 crc kubenswrapper[4747]: I1215 06:05:31.215070 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" Dec 15 06:05:31 crc kubenswrapper[4747]: I1215 06:05:31.708389 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm"] Dec 15 06:05:31 crc kubenswrapper[4747]: I1215 06:05:31.724634 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" event={"ID":"ae0fffb8-5fa6-4351-83ac-e2687b00d983","Type":"ContainerStarted","Data":"39847ec52079d2291178beb8a1c0f0d8f10adc544eb69aa7eabc5039d94f4454"} Dec 15 06:05:32 crc kubenswrapper[4747]: I1215 06:05:32.734796 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" event={"ID":"ae0fffb8-5fa6-4351-83ac-e2687b00d983","Type":"ContainerStarted","Data":"899a94fb9afe17006ce6bba3f2d0b13959b8e138e754802ecdb659d69b75efce"} Dec 15 06:05:32 crc kubenswrapper[4747]: I1215 06:05:32.753235 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" podStartSLOduration=1.9976736819999998 podStartE2EDuration="2.753213181s" podCreationTimestamp="2025-12-15 06:05:30 +0000 UTC" firstStartedPulling="2025-12-15 06:05:31.714686674 +0000 UTC m=+1695.411198591" lastFinishedPulling="2025-12-15 06:05:32.470226173 +0000 UTC m=+1696.166738090" observedRunningTime="2025-12-15 06:05:32.748321094 +0000 UTC m=+1696.444833011" watchObservedRunningTime="2025-12-15 06:05:32.753213181 +0000 UTC m=+1696.449725098" Dec 15 06:05:33 crc kubenswrapper[4747]: I1215 06:05:33.293777 4747 scope.go:117] "RemoveContainer" containerID="4283f361cf5c820766617f90d7b7861bcb68066c0d666b32ab51c95055ada8b8" Dec 15 06:05:45 crc kubenswrapper[4747]: I1215 06:05:45.629913 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:05:45 crc kubenswrapper[4747]: E1215 
06:05:45.630873 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:05:57 crc kubenswrapper[4747]: I1215 06:05:57.629097 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:05:57 crc kubenswrapper[4747]: E1215 06:05:57.629755 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:06:11 crc kubenswrapper[4747]: I1215 06:06:11.629760 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:06:11 crc kubenswrapper[4747]: E1215 06:06:11.630769 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:06:18 crc kubenswrapper[4747]: I1215 06:06:18.132387 4747 generic.go:334] "Generic (PLEG): container finished" podID="ae0fffb8-5fa6-4351-83ac-e2687b00d983" 
containerID="899a94fb9afe17006ce6bba3f2d0b13959b8e138e754802ecdb659d69b75efce" exitCode=0 Dec 15 06:06:18 crc kubenswrapper[4747]: I1215 06:06:18.132448 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" event={"ID":"ae0fffb8-5fa6-4351-83ac-e2687b00d983","Type":"ContainerDied","Data":"899a94fb9afe17006ce6bba3f2d0b13959b8e138e754802ecdb659d69b75efce"} Dec 15 06:06:19 crc kubenswrapper[4747]: I1215 06:06:19.495148 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" Dec 15 06:06:19 crc kubenswrapper[4747]: I1215 06:06:19.552197 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae0fffb8-5fa6-4351-83ac-e2687b00d983-inventory\") pod \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\" (UID: \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\") " Dec 15 06:06:19 crc kubenswrapper[4747]: I1215 06:06:19.552370 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0fffb8-5fa6-4351-83ac-e2687b00d983-ovn-combined-ca-bundle\") pod \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\" (UID: \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\") " Dec 15 06:06:19 crc kubenswrapper[4747]: I1215 06:06:19.552498 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7dpt\" (UniqueName: \"kubernetes.io/projected/ae0fffb8-5fa6-4351-83ac-e2687b00d983-kube-api-access-w7dpt\") pod \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\" (UID: \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\") " Dec 15 06:06:19 crc kubenswrapper[4747]: I1215 06:06:19.552536 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ae0fffb8-5fa6-4351-83ac-e2687b00d983-ssh-key\") pod \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\" (UID: 
\"ae0fffb8-5fa6-4351-83ac-e2687b00d983\") " Dec 15 06:06:19 crc kubenswrapper[4747]: I1215 06:06:19.552670 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ae0fffb8-5fa6-4351-83ac-e2687b00d983-ovncontroller-config-0\") pod \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\" (UID: \"ae0fffb8-5fa6-4351-83ac-e2687b00d983\") " Dec 15 06:06:19 crc kubenswrapper[4747]: I1215 06:06:19.559290 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae0fffb8-5fa6-4351-83ac-e2687b00d983-kube-api-access-w7dpt" (OuterVolumeSpecName: "kube-api-access-w7dpt") pod "ae0fffb8-5fa6-4351-83ac-e2687b00d983" (UID: "ae0fffb8-5fa6-4351-83ac-e2687b00d983"). InnerVolumeSpecName "kube-api-access-w7dpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:06:19 crc kubenswrapper[4747]: I1215 06:06:19.574162 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0fffb8-5fa6-4351-83ac-e2687b00d983-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ae0fffb8-5fa6-4351-83ac-e2687b00d983" (UID: "ae0fffb8-5fa6-4351-83ac-e2687b00d983"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:06:19 crc kubenswrapper[4747]: I1215 06:06:19.582464 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae0fffb8-5fa6-4351-83ac-e2687b00d983-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "ae0fffb8-5fa6-4351-83ac-e2687b00d983" (UID: "ae0fffb8-5fa6-4351-83ac-e2687b00d983"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 06:06:19 crc kubenswrapper[4747]: I1215 06:06:19.582571 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0fffb8-5fa6-4351-83ac-e2687b00d983-inventory" (OuterVolumeSpecName: "inventory") pod "ae0fffb8-5fa6-4351-83ac-e2687b00d983" (UID: "ae0fffb8-5fa6-4351-83ac-e2687b00d983"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:06:19 crc kubenswrapper[4747]: I1215 06:06:19.588092 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0fffb8-5fa6-4351-83ac-e2687b00d983-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ae0fffb8-5fa6-4351-83ac-e2687b00d983" (UID: "ae0fffb8-5fa6-4351-83ac-e2687b00d983"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:06:19 crc kubenswrapper[4747]: I1215 06:06:19.655478 4747 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae0fffb8-5fa6-4351-83ac-e2687b00d983-inventory\") on node \"crc\" DevicePath \"\"" Dec 15 06:06:19 crc kubenswrapper[4747]: I1215 06:06:19.655590 4747 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0fffb8-5fa6-4351-83ac-e2687b00d983-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 06:06:19 crc kubenswrapper[4747]: I1215 06:06:19.655664 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7dpt\" (UniqueName: \"kubernetes.io/projected/ae0fffb8-5fa6-4351-83ac-e2687b00d983-kube-api-access-w7dpt\") on node \"crc\" DevicePath \"\"" Dec 15 06:06:19 crc kubenswrapper[4747]: I1215 06:06:19.655720 4747 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ae0fffb8-5fa6-4351-83ac-e2687b00d983-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 15 06:06:19 crc 
kubenswrapper[4747]: I1215 06:06:19.655770 4747 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ae0fffb8-5fa6-4351-83ac-e2687b00d983-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.153106 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" event={"ID":"ae0fffb8-5fa6-4351-83ac-e2687b00d983","Type":"ContainerDied","Data":"39847ec52079d2291178beb8a1c0f0d8f10adc544eb69aa7eabc5039d94f4454"} Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.153155 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39847ec52079d2291178beb8a1c0f0d8f10adc544eb69aa7eabc5039d94f4454" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.153226 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cppqm" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.224289 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b"] Dec 15 06:06:20 crc kubenswrapper[4747]: E1215 06:06:20.224752 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0fffb8-5fa6-4351-83ac-e2687b00d983" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.224775 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0fffb8-5fa6-4351-83ac-e2687b00d983" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.225043 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae0fffb8-5fa6-4351-83ac-e2687b00d983" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.225705 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.231381 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.232425 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.232605 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.233285 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.233733 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bfv8q" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.234194 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.235430 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b"] Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.266875 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.266916 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.266971 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.267037 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.267062 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq7g8\" (UniqueName: \"kubernetes.io/projected/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-kube-api-access-zq7g8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.267227 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.369436 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.369489 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.369532 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.369564 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.369586 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq7g8\" (UniqueName: \"kubernetes.io/projected/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-kube-api-access-zq7g8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.369743 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.374410 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.374566 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b\" (UID: 
\"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.374821 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.375597 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.376299 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.386521 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq7g8\" (UniqueName: \"kubernetes.io/projected/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-kube-api-access-zq7g8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 
06:06:20.540427 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:20 crc kubenswrapper[4747]: I1215 06:06:20.886943 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b"] Dec 15 06:06:21 crc kubenswrapper[4747]: I1215 06:06:21.166186 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" event={"ID":"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4","Type":"ContainerStarted","Data":"53c1bf4a5efe2ce031e9af223f42c05b40006c7dbcb5e117da5321e1f1f0c4ac"} Dec 15 06:06:22 crc kubenswrapper[4747]: I1215 06:06:22.179598 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" event={"ID":"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4","Type":"ContainerStarted","Data":"cae972b59d7211a4c8fb9b7334b257bea88f05f42a9b403f1f6e4043fb0864ff"} Dec 15 06:06:22 crc kubenswrapper[4747]: I1215 06:06:22.211603 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" podStartSLOduration=1.578994707 podStartE2EDuration="2.211585163s" podCreationTimestamp="2025-12-15 06:06:20 +0000 UTC" firstStartedPulling="2025-12-15 06:06:20.897897431 +0000 UTC m=+1744.594409348" lastFinishedPulling="2025-12-15 06:06:21.530487887 +0000 UTC m=+1745.226999804" observedRunningTime="2025-12-15 06:06:22.198907186 +0000 UTC m=+1745.895419104" watchObservedRunningTime="2025-12-15 06:06:22.211585163 +0000 UTC m=+1745.908097081" Dec 15 06:06:25 crc kubenswrapper[4747]: I1215 06:06:25.629705 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:06:25 crc kubenswrapper[4747]: E1215 06:06:25.630510 4747 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:06:38 crc kubenswrapper[4747]: I1215 06:06:38.629858 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:06:38 crc kubenswrapper[4747]: E1215 06:06:38.631806 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:06:50 crc kubenswrapper[4747]: I1215 06:06:50.629054 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:06:50 crc kubenswrapper[4747]: E1215 06:06:50.630006 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:06:55 crc kubenswrapper[4747]: I1215 06:06:55.477530 4747 generic.go:334] "Generic (PLEG): container finished" podID="cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4" containerID="cae972b59d7211a4c8fb9b7334b257bea88f05f42a9b403f1f6e4043fb0864ff" exitCode=0 Dec 15 06:06:55 
crc kubenswrapper[4747]: I1215 06:06:55.477621 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" event={"ID":"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4","Type":"ContainerDied","Data":"cae972b59d7211a4c8fb9b7334b257bea88f05f42a9b403f1f6e4043fb0864ff"} Dec 15 06:06:56 crc kubenswrapper[4747]: I1215 06:06:56.819747 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:56 crc kubenswrapper[4747]: I1215 06:06:56.947168 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-ssh-key\") pod \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " Dec 15 06:06:56 crc kubenswrapper[4747]: I1215 06:06:56.947234 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-inventory\") pod \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " Dec 15 06:06:56 crc kubenswrapper[4747]: I1215 06:06:56.947309 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " Dec 15 06:06:56 crc kubenswrapper[4747]: I1215 06:06:56.947376 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-neutron-metadata-combined-ca-bundle\") pod \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\" (UID: 
\"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " Dec 15 06:06:56 crc kubenswrapper[4747]: I1215 06:06:56.947446 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-nova-metadata-neutron-config-0\") pod \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " Dec 15 06:06:56 crc kubenswrapper[4747]: I1215 06:06:56.947484 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq7g8\" (UniqueName: \"kubernetes.io/projected/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-kube-api-access-zq7g8\") pod \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\" (UID: \"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4\") " Dec 15 06:06:56 crc kubenswrapper[4747]: I1215 06:06:56.954946 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4" (UID: "cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:06:56 crc kubenswrapper[4747]: I1215 06:06:56.958803 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-kube-api-access-zq7g8" (OuterVolumeSpecName: "kube-api-access-zq7g8") pod "cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4" (UID: "cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4"). InnerVolumeSpecName "kube-api-access-zq7g8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:06:56 crc kubenswrapper[4747]: I1215 06:06:56.976348 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4" (UID: "cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:06:56 crc kubenswrapper[4747]: I1215 06:06:56.976890 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4" (UID: "cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:06:56 crc kubenswrapper[4747]: I1215 06:06:56.977187 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-inventory" (OuterVolumeSpecName: "inventory") pod "cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4" (UID: "cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:06:56 crc kubenswrapper[4747]: I1215 06:06:56.978536 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4" (UID: "cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.050051 4747 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.050096 4747 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.050115 4747 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.050131 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq7g8\" (UniqueName: \"kubernetes.io/projected/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-kube-api-access-zq7g8\") on node \"crc\" DevicePath \"\"" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.050147 4747 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.050159 4747 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4-inventory\") on node \"crc\" DevicePath \"\"" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.498574 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" 
event={"ID":"cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4","Type":"ContainerDied","Data":"53c1bf4a5efe2ce031e9af223f42c05b40006c7dbcb5e117da5321e1f1f0c4ac"} Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.498882 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53c1bf4a5efe2ce031e9af223f42c05b40006c7dbcb5e117da5321e1f1f0c4ac" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.498984 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.598581 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk"] Dec 15 06:06:57 crc kubenswrapper[4747]: E1215 06:06:57.599032 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.599051 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.599290 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.600084 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.601825 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.602136 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.602519 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.602833 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.603029 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bfv8q" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.607916 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk"] Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.764066 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk\" (UID: \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.764244 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk\" (UID: \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.764303 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk\" (UID: \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.764595 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqr5v\" (UniqueName: \"kubernetes.io/projected/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-kube-api-access-qqr5v\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk\" (UID: \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.764641 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk\" (UID: \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.865808 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk\" (UID: \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.865904 4747 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk\" (UID: \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.865953 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk\" (UID: \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.866054 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqr5v\" (UniqueName: \"kubernetes.io/projected/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-kube-api-access-qqr5v\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk\" (UID: \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.866078 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk\" (UID: \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.871047 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk\" (UID: \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.871719 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk\" (UID: \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.871757 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk\" (UID: \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.871945 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk\" (UID: \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.880114 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqr5v\" (UniqueName: \"kubernetes.io/projected/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-kube-api-access-qqr5v\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk\" (UID: \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" Dec 15 06:06:57 crc kubenswrapper[4747]: I1215 06:06:57.914185 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" Dec 15 06:06:58 crc kubenswrapper[4747]: I1215 06:06:58.362419 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk"] Dec 15 06:06:58 crc kubenswrapper[4747]: I1215 06:06:58.508777 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" event={"ID":"56848ff3-1ce9-42b3-be44-5b8d4280c9a1","Type":"ContainerStarted","Data":"4090594995af770dc7d37c6c97a16cdcb624e7c78a913bac556c9f70345f4a2c"} Dec 15 06:06:59 crc kubenswrapper[4747]: I1215 06:06:59.530060 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" event={"ID":"56848ff3-1ce9-42b3-be44-5b8d4280c9a1","Type":"ContainerStarted","Data":"26cfa0cd6a00404feb754b30b234df68c905485e6d83bbfd60ed30ccb24bc407"} Dec 15 06:06:59 crc kubenswrapper[4747]: I1215 06:06:59.552741 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" podStartSLOduration=1.944994463 podStartE2EDuration="2.552717384s" podCreationTimestamp="2025-12-15 06:06:57 +0000 UTC" firstStartedPulling="2025-12-15 06:06:58.375413654 +0000 UTC m=+1782.071925572" lastFinishedPulling="2025-12-15 06:06:58.983136577 +0000 UTC m=+1782.679648493" observedRunningTime="2025-12-15 06:06:59.54556135 +0000 UTC m=+1783.242073277" watchObservedRunningTime="2025-12-15 06:06:59.552717384 +0000 UTC m=+1783.249229301" Dec 15 06:07:05 crc kubenswrapper[4747]: I1215 06:07:05.630334 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:07:05 crc kubenswrapper[4747]: E1215 06:07:05.631595 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:07:16 crc kubenswrapper[4747]: I1215 06:07:16.650640 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:07:16 crc kubenswrapper[4747]: E1215 06:07:16.652405 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:07:30 crc kubenswrapper[4747]: I1215 06:07:30.631180 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:07:30 crc kubenswrapper[4747]: E1215 06:07:30.632276 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:07:45 crc kubenswrapper[4747]: I1215 06:07:45.629695 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:07:45 crc kubenswrapper[4747]: E1215 06:07:45.630816 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:07:57 crc kubenswrapper[4747]: I1215 06:07:57.579046 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q5gxm"] Dec 15 06:07:57 crc kubenswrapper[4747]: I1215 06:07:57.581540 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q5gxm" Dec 15 06:07:57 crc kubenswrapper[4747]: I1215 06:07:57.596496 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q5gxm"] Dec 15 06:07:57 crc kubenswrapper[4747]: I1215 06:07:57.628824 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:07:57 crc kubenswrapper[4747]: E1215 06:07:57.629279 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:07:57 crc kubenswrapper[4747]: I1215 06:07:57.673496 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9r4h\" (UniqueName: \"kubernetes.io/projected/3beaf12d-e287-4842-8056-dfb1f34d3ae2-kube-api-access-f9r4h\") pod \"redhat-operators-q5gxm\" (UID: \"3beaf12d-e287-4842-8056-dfb1f34d3ae2\") " pod="openshift-marketplace/redhat-operators-q5gxm" Dec 15 06:07:57 crc kubenswrapper[4747]: I1215 06:07:57.673615 4747 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3beaf12d-e287-4842-8056-dfb1f34d3ae2-utilities\") pod \"redhat-operators-q5gxm\" (UID: \"3beaf12d-e287-4842-8056-dfb1f34d3ae2\") " pod="openshift-marketplace/redhat-operators-q5gxm" Dec 15 06:07:57 crc kubenswrapper[4747]: I1215 06:07:57.673660 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3beaf12d-e287-4842-8056-dfb1f34d3ae2-catalog-content\") pod \"redhat-operators-q5gxm\" (UID: \"3beaf12d-e287-4842-8056-dfb1f34d3ae2\") " pod="openshift-marketplace/redhat-operators-q5gxm" Dec 15 06:07:57 crc kubenswrapper[4747]: I1215 06:07:57.775487 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9r4h\" (UniqueName: \"kubernetes.io/projected/3beaf12d-e287-4842-8056-dfb1f34d3ae2-kube-api-access-f9r4h\") pod \"redhat-operators-q5gxm\" (UID: \"3beaf12d-e287-4842-8056-dfb1f34d3ae2\") " pod="openshift-marketplace/redhat-operators-q5gxm" Dec 15 06:07:57 crc kubenswrapper[4747]: I1215 06:07:57.775593 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3beaf12d-e287-4842-8056-dfb1f34d3ae2-utilities\") pod \"redhat-operators-q5gxm\" (UID: \"3beaf12d-e287-4842-8056-dfb1f34d3ae2\") " pod="openshift-marketplace/redhat-operators-q5gxm" Dec 15 06:07:57 crc kubenswrapper[4747]: I1215 06:07:57.775626 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3beaf12d-e287-4842-8056-dfb1f34d3ae2-catalog-content\") pod \"redhat-operators-q5gxm\" (UID: \"3beaf12d-e287-4842-8056-dfb1f34d3ae2\") " pod="openshift-marketplace/redhat-operators-q5gxm" Dec 15 06:07:57 crc kubenswrapper[4747]: I1215 06:07:57.776097 4747 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3beaf12d-e287-4842-8056-dfb1f34d3ae2-utilities\") pod \"redhat-operators-q5gxm\" (UID: \"3beaf12d-e287-4842-8056-dfb1f34d3ae2\") " pod="openshift-marketplace/redhat-operators-q5gxm" Dec 15 06:07:57 crc kubenswrapper[4747]: I1215 06:07:57.776157 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3beaf12d-e287-4842-8056-dfb1f34d3ae2-catalog-content\") pod \"redhat-operators-q5gxm\" (UID: \"3beaf12d-e287-4842-8056-dfb1f34d3ae2\") " pod="openshift-marketplace/redhat-operators-q5gxm" Dec 15 06:07:57 crc kubenswrapper[4747]: I1215 06:07:57.794320 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9r4h\" (UniqueName: \"kubernetes.io/projected/3beaf12d-e287-4842-8056-dfb1f34d3ae2-kube-api-access-f9r4h\") pod \"redhat-operators-q5gxm\" (UID: \"3beaf12d-e287-4842-8056-dfb1f34d3ae2\") " pod="openshift-marketplace/redhat-operators-q5gxm" Dec 15 06:07:57 crc kubenswrapper[4747]: I1215 06:07:57.903673 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q5gxm" Dec 15 06:07:58 crc kubenswrapper[4747]: I1215 06:07:58.296956 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q5gxm"] Dec 15 06:07:58 crc kubenswrapper[4747]: E1215 06:07:58.611621 4747 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3beaf12d_e287_4842_8056_dfb1f34d3ae2.slice/crio-conmon-34e7bc0f032ea5838a498f3872840368ef3940d79dbeffefaff3c8c89dcb3294.scope\": RecentStats: unable to find data in memory cache]" Dec 15 06:07:59 crc kubenswrapper[4747]: I1215 06:07:59.096578 4747 generic.go:334] "Generic (PLEG): container finished" podID="3beaf12d-e287-4842-8056-dfb1f34d3ae2" containerID="34e7bc0f032ea5838a498f3872840368ef3940d79dbeffefaff3c8c89dcb3294" exitCode=0 Dec 15 06:07:59 crc kubenswrapper[4747]: I1215 06:07:59.096652 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5gxm" event={"ID":"3beaf12d-e287-4842-8056-dfb1f34d3ae2","Type":"ContainerDied","Data":"34e7bc0f032ea5838a498f3872840368ef3940d79dbeffefaff3c8c89dcb3294"} Dec 15 06:07:59 crc kubenswrapper[4747]: I1215 06:07:59.096696 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5gxm" event={"ID":"3beaf12d-e287-4842-8056-dfb1f34d3ae2","Type":"ContainerStarted","Data":"51a7edec1a9457a895cbabb564486843bc50d98f37fb2d51d258cb7dc1ec2a4c"} Dec 15 06:07:59 crc kubenswrapper[4747]: I1215 06:07:59.099733 4747 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 15 06:08:00 crc kubenswrapper[4747]: I1215 06:08:00.108372 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5gxm" 
event={"ID":"3beaf12d-e287-4842-8056-dfb1f34d3ae2","Type":"ContainerStarted","Data":"933efd701ea2b429408b8cca710bd8449a56ec34896e8b1fc51ca502101bf88c"} Dec 15 06:08:01 crc kubenswrapper[4747]: I1215 06:08:01.120654 4747 generic.go:334] "Generic (PLEG): container finished" podID="3beaf12d-e287-4842-8056-dfb1f34d3ae2" containerID="933efd701ea2b429408b8cca710bd8449a56ec34896e8b1fc51ca502101bf88c" exitCode=0 Dec 15 06:08:01 crc kubenswrapper[4747]: I1215 06:08:01.120715 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5gxm" event={"ID":"3beaf12d-e287-4842-8056-dfb1f34d3ae2","Type":"ContainerDied","Data":"933efd701ea2b429408b8cca710bd8449a56ec34896e8b1fc51ca502101bf88c"} Dec 15 06:08:02 crc kubenswrapper[4747]: I1215 06:08:02.133687 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5gxm" event={"ID":"3beaf12d-e287-4842-8056-dfb1f34d3ae2","Type":"ContainerStarted","Data":"da9a1b26d515e438c6ff88f847e2975a7ebeff651535375025a3199e436f330f"} Dec 15 06:08:02 crc kubenswrapper[4747]: I1215 06:08:02.165319 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q5gxm" podStartSLOduration=2.563418855 podStartE2EDuration="5.16530052s" podCreationTimestamp="2025-12-15 06:07:57 +0000 UTC" firstStartedPulling="2025-12-15 06:07:59.0994391 +0000 UTC m=+1842.795951018" lastFinishedPulling="2025-12-15 06:08:01.701320766 +0000 UTC m=+1845.397832683" observedRunningTime="2025-12-15 06:08:02.155883213 +0000 UTC m=+1845.852395131" watchObservedRunningTime="2025-12-15 06:08:02.16530052 +0000 UTC m=+1845.861812438" Dec 15 06:08:07 crc kubenswrapper[4747]: I1215 06:08:07.904498 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q5gxm" Dec 15 06:08:07 crc kubenswrapper[4747]: I1215 06:08:07.905188 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-q5gxm" Dec 15 06:08:07 crc kubenswrapper[4747]: I1215 06:08:07.942734 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q5gxm" Dec 15 06:08:08 crc kubenswrapper[4747]: I1215 06:08:08.226905 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q5gxm" Dec 15 06:08:08 crc kubenswrapper[4747]: I1215 06:08:08.275590 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q5gxm"] Dec 15 06:08:08 crc kubenswrapper[4747]: I1215 06:08:08.630829 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:08:08 crc kubenswrapper[4747]: E1215 06:08:08.631220 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:08:10 crc kubenswrapper[4747]: I1215 06:08:10.203763 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q5gxm" podUID="3beaf12d-e287-4842-8056-dfb1f34d3ae2" containerName="registry-server" containerID="cri-o://da9a1b26d515e438c6ff88f847e2975a7ebeff651535375025a3199e436f330f" gracePeriod=2 Dec 15 06:08:10 crc kubenswrapper[4747]: I1215 06:08:10.592549 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q5gxm" Dec 15 06:08:10 crc kubenswrapper[4747]: I1215 06:08:10.748492 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3beaf12d-e287-4842-8056-dfb1f34d3ae2-catalog-content\") pod \"3beaf12d-e287-4842-8056-dfb1f34d3ae2\" (UID: \"3beaf12d-e287-4842-8056-dfb1f34d3ae2\") " Dec 15 06:08:10 crc kubenswrapper[4747]: I1215 06:08:10.748684 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9r4h\" (UniqueName: \"kubernetes.io/projected/3beaf12d-e287-4842-8056-dfb1f34d3ae2-kube-api-access-f9r4h\") pod \"3beaf12d-e287-4842-8056-dfb1f34d3ae2\" (UID: \"3beaf12d-e287-4842-8056-dfb1f34d3ae2\") " Dec 15 06:08:10 crc kubenswrapper[4747]: I1215 06:08:10.748783 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3beaf12d-e287-4842-8056-dfb1f34d3ae2-utilities\") pod \"3beaf12d-e287-4842-8056-dfb1f34d3ae2\" (UID: \"3beaf12d-e287-4842-8056-dfb1f34d3ae2\") " Dec 15 06:08:10 crc kubenswrapper[4747]: I1215 06:08:10.749570 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3beaf12d-e287-4842-8056-dfb1f34d3ae2-utilities" (OuterVolumeSpecName: "utilities") pod "3beaf12d-e287-4842-8056-dfb1f34d3ae2" (UID: "3beaf12d-e287-4842-8056-dfb1f34d3ae2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:08:10 crc kubenswrapper[4747]: I1215 06:08:10.754863 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3beaf12d-e287-4842-8056-dfb1f34d3ae2-kube-api-access-f9r4h" (OuterVolumeSpecName: "kube-api-access-f9r4h") pod "3beaf12d-e287-4842-8056-dfb1f34d3ae2" (UID: "3beaf12d-e287-4842-8056-dfb1f34d3ae2"). InnerVolumeSpecName "kube-api-access-f9r4h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:08:10 crc kubenswrapper[4747]: I1215 06:08:10.839317 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3beaf12d-e287-4842-8056-dfb1f34d3ae2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3beaf12d-e287-4842-8056-dfb1f34d3ae2" (UID: "3beaf12d-e287-4842-8056-dfb1f34d3ae2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:08:10 crc kubenswrapper[4747]: I1215 06:08:10.851978 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3beaf12d-e287-4842-8056-dfb1f34d3ae2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 06:08:10 crc kubenswrapper[4747]: I1215 06:08:10.852020 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9r4h\" (UniqueName: \"kubernetes.io/projected/3beaf12d-e287-4842-8056-dfb1f34d3ae2-kube-api-access-f9r4h\") on node \"crc\" DevicePath \"\"" Dec 15 06:08:10 crc kubenswrapper[4747]: I1215 06:08:10.852036 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3beaf12d-e287-4842-8056-dfb1f34d3ae2-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 06:08:11 crc kubenswrapper[4747]: I1215 06:08:11.216799 4747 generic.go:334] "Generic (PLEG): container finished" podID="3beaf12d-e287-4842-8056-dfb1f34d3ae2" containerID="da9a1b26d515e438c6ff88f847e2975a7ebeff651535375025a3199e436f330f" exitCode=0 Dec 15 06:08:11 crc kubenswrapper[4747]: I1215 06:08:11.216855 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q5gxm" event={"ID":"3beaf12d-e287-4842-8056-dfb1f34d3ae2","Type":"ContainerDied","Data":"da9a1b26d515e438c6ff88f847e2975a7ebeff651535375025a3199e436f330f"} Dec 15 06:08:11 crc kubenswrapper[4747]: I1215 06:08:11.216891 4747 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-q5gxm" event={"ID":"3beaf12d-e287-4842-8056-dfb1f34d3ae2","Type":"ContainerDied","Data":"51a7edec1a9457a895cbabb564486843bc50d98f37fb2d51d258cb7dc1ec2a4c"} Dec 15 06:08:11 crc kubenswrapper[4747]: I1215 06:08:11.216912 4747 scope.go:117] "RemoveContainer" containerID="da9a1b26d515e438c6ff88f847e2975a7ebeff651535375025a3199e436f330f" Dec 15 06:08:11 crc kubenswrapper[4747]: I1215 06:08:11.216910 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q5gxm" Dec 15 06:08:11 crc kubenswrapper[4747]: I1215 06:08:11.236142 4747 scope.go:117] "RemoveContainer" containerID="933efd701ea2b429408b8cca710bd8449a56ec34896e8b1fc51ca502101bf88c" Dec 15 06:08:11 crc kubenswrapper[4747]: I1215 06:08:11.251966 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q5gxm"] Dec 15 06:08:11 crc kubenswrapper[4747]: I1215 06:08:11.260673 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q5gxm"] Dec 15 06:08:11 crc kubenswrapper[4747]: I1215 06:08:11.274327 4747 scope.go:117] "RemoveContainer" containerID="34e7bc0f032ea5838a498f3872840368ef3940d79dbeffefaff3c8c89dcb3294" Dec 15 06:08:11 crc kubenswrapper[4747]: I1215 06:08:11.291896 4747 scope.go:117] "RemoveContainer" containerID="da9a1b26d515e438c6ff88f847e2975a7ebeff651535375025a3199e436f330f" Dec 15 06:08:11 crc kubenswrapper[4747]: E1215 06:08:11.292366 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da9a1b26d515e438c6ff88f847e2975a7ebeff651535375025a3199e436f330f\": container with ID starting with da9a1b26d515e438c6ff88f847e2975a7ebeff651535375025a3199e436f330f not found: ID does not exist" containerID="da9a1b26d515e438c6ff88f847e2975a7ebeff651535375025a3199e436f330f" Dec 15 06:08:11 crc kubenswrapper[4747]: I1215 06:08:11.292417 4747 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da9a1b26d515e438c6ff88f847e2975a7ebeff651535375025a3199e436f330f"} err="failed to get container status \"da9a1b26d515e438c6ff88f847e2975a7ebeff651535375025a3199e436f330f\": rpc error: code = NotFound desc = could not find container \"da9a1b26d515e438c6ff88f847e2975a7ebeff651535375025a3199e436f330f\": container with ID starting with da9a1b26d515e438c6ff88f847e2975a7ebeff651535375025a3199e436f330f not found: ID does not exist" Dec 15 06:08:11 crc kubenswrapper[4747]: I1215 06:08:11.292450 4747 scope.go:117] "RemoveContainer" containerID="933efd701ea2b429408b8cca710bd8449a56ec34896e8b1fc51ca502101bf88c" Dec 15 06:08:11 crc kubenswrapper[4747]: E1215 06:08:11.292781 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"933efd701ea2b429408b8cca710bd8449a56ec34896e8b1fc51ca502101bf88c\": container with ID starting with 933efd701ea2b429408b8cca710bd8449a56ec34896e8b1fc51ca502101bf88c not found: ID does not exist" containerID="933efd701ea2b429408b8cca710bd8449a56ec34896e8b1fc51ca502101bf88c" Dec 15 06:08:11 crc kubenswrapper[4747]: I1215 06:08:11.292815 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"933efd701ea2b429408b8cca710bd8449a56ec34896e8b1fc51ca502101bf88c"} err="failed to get container status \"933efd701ea2b429408b8cca710bd8449a56ec34896e8b1fc51ca502101bf88c\": rpc error: code = NotFound desc = could not find container \"933efd701ea2b429408b8cca710bd8449a56ec34896e8b1fc51ca502101bf88c\": container with ID starting with 933efd701ea2b429408b8cca710bd8449a56ec34896e8b1fc51ca502101bf88c not found: ID does not exist" Dec 15 06:08:11 crc kubenswrapper[4747]: I1215 06:08:11.292840 4747 scope.go:117] "RemoveContainer" containerID="34e7bc0f032ea5838a498f3872840368ef3940d79dbeffefaff3c8c89dcb3294" Dec 15 06:08:11 crc kubenswrapper[4747]: E1215 
06:08:11.293099 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34e7bc0f032ea5838a498f3872840368ef3940d79dbeffefaff3c8c89dcb3294\": container with ID starting with 34e7bc0f032ea5838a498f3872840368ef3940d79dbeffefaff3c8c89dcb3294 not found: ID does not exist" containerID="34e7bc0f032ea5838a498f3872840368ef3940d79dbeffefaff3c8c89dcb3294" Dec 15 06:08:11 crc kubenswrapper[4747]: I1215 06:08:11.293124 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34e7bc0f032ea5838a498f3872840368ef3940d79dbeffefaff3c8c89dcb3294"} err="failed to get container status \"34e7bc0f032ea5838a498f3872840368ef3940d79dbeffefaff3c8c89dcb3294\": rpc error: code = NotFound desc = could not find container \"34e7bc0f032ea5838a498f3872840368ef3940d79dbeffefaff3c8c89dcb3294\": container with ID starting with 34e7bc0f032ea5838a498f3872840368ef3940d79dbeffefaff3c8c89dcb3294 not found: ID does not exist" Dec 15 06:08:12 crc kubenswrapper[4747]: I1215 06:08:12.640442 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3beaf12d-e287-4842-8056-dfb1f34d3ae2" path="/var/lib/kubelet/pods/3beaf12d-e287-4842-8056-dfb1f34d3ae2/volumes" Dec 15 06:08:23 crc kubenswrapper[4747]: I1215 06:08:23.630539 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:08:23 crc kubenswrapper[4747]: E1215 06:08:23.631821 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:08:37 crc kubenswrapper[4747]: I1215 06:08:37.630040 
4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:08:37 crc kubenswrapper[4747]: E1215 06:08:37.632406 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:08:49 crc kubenswrapper[4747]: I1215 06:08:49.630878 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:08:49 crc kubenswrapper[4747]: E1215 06:08:49.631865 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:09:00 crc kubenswrapper[4747]: I1215 06:09:00.629486 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:09:00 crc kubenswrapper[4747]: E1215 06:09:00.630369 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:09:11 crc kubenswrapper[4747]: I1215 
06:09:11.629167 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:09:11 crc kubenswrapper[4747]: E1215 06:09:11.630059 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:09:25 crc kubenswrapper[4747]: I1215 06:09:25.629548 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:09:25 crc kubenswrapper[4747]: E1215 06:09:25.630484 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:09:38 crc kubenswrapper[4747]: I1215 06:09:38.630114 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:09:38 crc kubenswrapper[4747]: E1215 06:09:38.631052 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:09:52 crc 
kubenswrapper[4747]: I1215 06:09:52.150130 4747 generic.go:334] "Generic (PLEG): container finished" podID="56848ff3-1ce9-42b3-be44-5b8d4280c9a1" containerID="26cfa0cd6a00404feb754b30b234df68c905485e6d83bbfd60ed30ccb24bc407" exitCode=0 Dec 15 06:09:52 crc kubenswrapper[4747]: I1215 06:09:52.150217 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" event={"ID":"56848ff3-1ce9-42b3-be44-5b8d4280c9a1","Type":"ContainerDied","Data":"26cfa0cd6a00404feb754b30b234df68c905485e6d83bbfd60ed30ccb24bc407"} Dec 15 06:09:52 crc kubenswrapper[4747]: I1215 06:09:52.630273 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:09:52 crc kubenswrapper[4747]: E1215 06:09:52.630666 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:09:53 crc kubenswrapper[4747]: I1215 06:09:53.528435 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" Dec 15 06:09:53 crc kubenswrapper[4747]: I1215 06:09:53.652109 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-libvirt-secret-0\") pod \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\" (UID: \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\") " Dec 15 06:09:53 crc kubenswrapper[4747]: I1215 06:09:53.652225 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-ssh-key\") pod \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\" (UID: \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\") " Dec 15 06:09:53 crc kubenswrapper[4747]: I1215 06:09:53.652258 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-libvirt-combined-ca-bundle\") pod \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\" (UID: \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\") " Dec 15 06:09:53 crc kubenswrapper[4747]: I1215 06:09:53.652340 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqr5v\" (UniqueName: \"kubernetes.io/projected/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-kube-api-access-qqr5v\") pod \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\" (UID: \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\") " Dec 15 06:09:53 crc kubenswrapper[4747]: I1215 06:09:53.652453 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-inventory\") pod \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\" (UID: \"56848ff3-1ce9-42b3-be44-5b8d4280c9a1\") " Dec 15 06:09:53 crc kubenswrapper[4747]: I1215 06:09:53.659464 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "56848ff3-1ce9-42b3-be44-5b8d4280c9a1" (UID: "56848ff3-1ce9-42b3-be44-5b8d4280c9a1"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:09:53 crc kubenswrapper[4747]: I1215 06:09:53.660087 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-kube-api-access-qqr5v" (OuterVolumeSpecName: "kube-api-access-qqr5v") pod "56848ff3-1ce9-42b3-be44-5b8d4280c9a1" (UID: "56848ff3-1ce9-42b3-be44-5b8d4280c9a1"). InnerVolumeSpecName "kube-api-access-qqr5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:09:53 crc kubenswrapper[4747]: I1215 06:09:53.678693 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "56848ff3-1ce9-42b3-be44-5b8d4280c9a1" (UID: "56848ff3-1ce9-42b3-be44-5b8d4280c9a1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:09:53 crc kubenswrapper[4747]: I1215 06:09:53.679340 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "56848ff3-1ce9-42b3-be44-5b8d4280c9a1" (UID: "56848ff3-1ce9-42b3-be44-5b8d4280c9a1"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:09:53 crc kubenswrapper[4747]: I1215 06:09:53.679786 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-inventory" (OuterVolumeSpecName: "inventory") pod "56848ff3-1ce9-42b3-be44-5b8d4280c9a1" (UID: "56848ff3-1ce9-42b3-be44-5b8d4280c9a1"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:09:53 crc kubenswrapper[4747]: I1215 06:09:53.756344 4747 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-inventory\") on node \"crc\" DevicePath \"\"" Dec 15 06:09:53 crc kubenswrapper[4747]: I1215 06:09:53.757063 4747 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 15 06:09:53 crc kubenswrapper[4747]: I1215 06:09:53.757169 4747 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 15 06:09:53 crc kubenswrapper[4747]: I1215 06:09:53.757231 4747 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 06:09:53 crc kubenswrapper[4747]: I1215 06:09:53.757298 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqr5v\" (UniqueName: \"kubernetes.io/projected/56848ff3-1ce9-42b3-be44-5b8d4280c9a1-kube-api-access-qqr5v\") on node \"crc\" DevicePath \"\"" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.169620 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" event={"ID":"56848ff3-1ce9-42b3-be44-5b8d4280c9a1","Type":"ContainerDied","Data":"4090594995af770dc7d37c6c97a16cdcb624e7c78a913bac556c9f70345f4a2c"} Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.169667 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.169671 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4090594995af770dc7d37c6c97a16cdcb624e7c78a913bac556c9f70345f4a2c" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.239513 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg"] Dec 15 06:09:54 crc kubenswrapper[4747]: E1215 06:09:54.240024 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3beaf12d-e287-4842-8056-dfb1f34d3ae2" containerName="extract-content" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.240045 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="3beaf12d-e287-4842-8056-dfb1f34d3ae2" containerName="extract-content" Dec 15 06:09:54 crc kubenswrapper[4747]: E1215 06:09:54.240090 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56848ff3-1ce9-42b3-be44-5b8d4280c9a1" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.240097 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="56848ff3-1ce9-42b3-be44-5b8d4280c9a1" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 15 06:09:54 crc kubenswrapper[4747]: E1215 06:09:54.240109 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3beaf12d-e287-4842-8056-dfb1f34d3ae2" containerName="extract-utilities" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.240115 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="3beaf12d-e287-4842-8056-dfb1f34d3ae2" containerName="extract-utilities" Dec 15 06:09:54 crc kubenswrapper[4747]: E1215 06:09:54.240133 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3beaf12d-e287-4842-8056-dfb1f34d3ae2" containerName="registry-server" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 
06:09:54.240138 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="3beaf12d-e287-4842-8056-dfb1f34d3ae2" containerName="registry-server" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.240336 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="56848ff3-1ce9-42b3-be44-5b8d4280c9a1" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.240349 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="3beaf12d-e287-4842-8056-dfb1f34d3ae2" containerName="registry-server" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.241109 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.247615 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.247968 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.248116 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.248749 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.248908 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.249263 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.249409 4747 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"openstack-edpm-ipam-dockercfg-bfv8q" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.250345 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg"] Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.263649 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.263685 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.263829 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.263871 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.263985 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjhpq\" (UniqueName: \"kubernetes.io/projected/6a04d0c3-49fa-44ad-ab27-08ba583d1142-kube-api-access-fjhpq\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.264149 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.264313 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.264415 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.264474 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.366754 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.366854 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.366901 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.367048 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-cell1-compute-config-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.367087 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.367169 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.367188 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.367260 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjhpq\" (UniqueName: \"kubernetes.io/projected/6a04d0c3-49fa-44ad-ab27-08ba583d1142-kube-api-access-fjhpq\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.367374 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.369121 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.371008 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.371167 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.372046 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.372523 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.372703 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.372809 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.373288 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.382731 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjhpq\" (UniqueName: 
\"kubernetes.io/projected/6a04d0c3-49fa-44ad-ab27-08ba583d1142-kube-api-access-fjhpq\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v7cvg\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:54 crc kubenswrapper[4747]: I1215 06:09:54.563718 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:09:55 crc kubenswrapper[4747]: I1215 06:09:55.046112 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg"] Dec 15 06:09:55 crc kubenswrapper[4747]: I1215 06:09:55.196609 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" event={"ID":"6a04d0c3-49fa-44ad-ab27-08ba583d1142","Type":"ContainerStarted","Data":"de00c59671d0b793ef9bf8f86b024b468630a261490ad4d2dd172906a0f3bd9a"} Dec 15 06:09:56 crc kubenswrapper[4747]: I1215 06:09:56.209159 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" event={"ID":"6a04d0c3-49fa-44ad-ab27-08ba583d1142","Type":"ContainerStarted","Data":"f085c8e23d65a40ecc89fb1cbfae9e848d673857be084beea03a3c21375e8038"} Dec 15 06:09:56 crc kubenswrapper[4747]: I1215 06:09:56.225904 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" podStartSLOduration=1.5692407259999999 podStartE2EDuration="2.225887116s" podCreationTimestamp="2025-12-15 06:09:54 +0000 UTC" firstStartedPulling="2025-12-15 06:09:55.062393119 +0000 UTC m=+1958.758905035" lastFinishedPulling="2025-12-15 06:09:55.719039508 +0000 UTC m=+1959.415551425" observedRunningTime="2025-12-15 06:09:56.222160078 +0000 UTC m=+1959.918671995" watchObservedRunningTime="2025-12-15 06:09:56.225887116 +0000 UTC m=+1959.922399033" Dec 15 06:10:03 crc kubenswrapper[4747]: 
I1215 06:10:03.630362 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:10:03 crc kubenswrapper[4747]: E1215 06:10:03.631530 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:10:18 crc kubenswrapper[4747]: I1215 06:10:18.629406 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:10:18 crc kubenswrapper[4747]: E1215 06:10:18.630313 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:10:33 crc kubenswrapper[4747]: I1215 06:10:33.629542 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:10:34 crc kubenswrapper[4747]: I1215 06:10:34.558271 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerStarted","Data":"792cda812eb2119b7ff0b41b927687bb15e0e2fd42f24ffa26a56782c6542e51"} Dec 15 06:11:50 crc kubenswrapper[4747]: I1215 06:11:50.290455 4747 generic.go:334] "Generic (PLEG): container finished" podID="6a04d0c3-49fa-44ad-ab27-08ba583d1142" 
containerID="f085c8e23d65a40ecc89fb1cbfae9e848d673857be084beea03a3c21375e8038" exitCode=0 Dec 15 06:11:50 crc kubenswrapper[4747]: I1215 06:11:50.290541 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" event={"ID":"6a04d0c3-49fa-44ad-ab27-08ba583d1142","Type":"ContainerDied","Data":"f085c8e23d65a40ecc89fb1cbfae9e848d673857be084beea03a3c21375e8038"} Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.644994 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.793820 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-ssh-key\") pod \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.793967 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-combined-ca-bundle\") pod \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.794086 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-cell1-compute-config-0\") pod \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.794196 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-inventory\") pod 
\"6a04d0c3-49fa-44ad-ab27-08ba583d1142\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.794245 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjhpq\" (UniqueName: \"kubernetes.io/projected/6a04d0c3-49fa-44ad-ab27-08ba583d1142-kube-api-access-fjhpq\") pod \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.794309 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-migration-ssh-key-0\") pod \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.794345 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-migration-ssh-key-1\") pod \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.794818 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-extra-config-0\") pod \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.795053 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-cell1-compute-config-1\") pod \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\" (UID: \"6a04d0c3-49fa-44ad-ab27-08ba583d1142\") " Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 
06:11:51.802185 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "6a04d0c3-49fa-44ad-ab27-08ba583d1142" (UID: "6a04d0c3-49fa-44ad-ab27-08ba583d1142"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.803145 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a04d0c3-49fa-44ad-ab27-08ba583d1142-kube-api-access-fjhpq" (OuterVolumeSpecName: "kube-api-access-fjhpq") pod "6a04d0c3-49fa-44ad-ab27-08ba583d1142" (UID: "6a04d0c3-49fa-44ad-ab27-08ba583d1142"). InnerVolumeSpecName "kube-api-access-fjhpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.822217 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "6a04d0c3-49fa-44ad-ab27-08ba583d1142" (UID: "6a04d0c3-49fa-44ad-ab27-08ba583d1142"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.824966 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "6a04d0c3-49fa-44ad-ab27-08ba583d1142" (UID: "6a04d0c3-49fa-44ad-ab27-08ba583d1142"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.829144 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "6a04d0c3-49fa-44ad-ab27-08ba583d1142" (UID: "6a04d0c3-49fa-44ad-ab27-08ba583d1142"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.829910 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "6a04d0c3-49fa-44ad-ab27-08ba583d1142" (UID: "6a04d0c3-49fa-44ad-ab27-08ba583d1142"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.829956 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6a04d0c3-49fa-44ad-ab27-08ba583d1142" (UID: "6a04d0c3-49fa-44ad-ab27-08ba583d1142"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.831315 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "6a04d0c3-49fa-44ad-ab27-08ba583d1142" (UID: "6a04d0c3-49fa-44ad-ab27-08ba583d1142"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.835231 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-inventory" (OuterVolumeSpecName: "inventory") pod "6a04d0c3-49fa-44ad-ab27-08ba583d1142" (UID: "6a04d0c3-49fa-44ad-ab27-08ba583d1142"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.898488 4747 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.898529 4747 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.898547 4747 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.898612 4747 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-inventory\") on node \"crc\" DevicePath \"\"" Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.898625 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjhpq\" (UniqueName: \"kubernetes.io/projected/6a04d0c3-49fa-44ad-ab27-08ba583d1142-kube-api-access-fjhpq\") on node \"crc\" DevicePath \"\"" Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.898634 4747 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.898643 4747 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.898677 4747 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 15 06:11:51 crc kubenswrapper[4747]: I1215 06:11:51.898687 4747 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6a04d0c3-49fa-44ad-ab27-08ba583d1142-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.310477 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" event={"ID":"6a04d0c3-49fa-44ad-ab27-08ba583d1142","Type":"ContainerDied","Data":"de00c59671d0b793ef9bf8f86b024b468630a261490ad4d2dd172906a0f3bd9a"} Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.310543 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de00c59671d0b793ef9bf8f86b024b468630a261490ad4d2dd172906a0f3bd9a" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.310624 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v7cvg" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.491858 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t"] Dec 15 06:11:52 crc kubenswrapper[4747]: E1215 06:11:52.492320 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a04d0c3-49fa-44ad-ab27-08ba583d1142" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.492343 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a04d0c3-49fa-44ad-ab27-08ba583d1142" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.492550 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a04d0c3-49fa-44ad-ab27-08ba583d1142" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.493288 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.495529 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.495782 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.496079 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bfv8q" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.496220 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.496369 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.500318 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t"] Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.512713 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f674t\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.512778 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f674t\" (UID: 
\"a7d200be-a60e-4759-8772-1845c1ab0534\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.512836 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f674t\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.512921 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f674t\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.512984 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f674t\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.513047 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f674t\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.513157 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bx8x\" (UniqueName: \"kubernetes.io/projected/a7d200be-a60e-4759-8772-1845c1ab0534-kube-api-access-2bx8x\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f674t\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.615035 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f674t\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.615103 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f674t\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.615172 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f674t\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.615268 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f674t\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.615316 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f674t\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.615381 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f674t\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.615512 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bx8x\" (UniqueName: \"kubernetes.io/projected/a7d200be-a60e-4759-8772-1845c1ab0534-kube-api-access-2bx8x\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f674t\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.622121 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f674t\" (UID: 
\"a7d200be-a60e-4759-8772-1845c1ab0534\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.622355 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f674t\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.622672 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f674t\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.622701 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f674t\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.622851 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f674t\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.624519 4747 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f674t\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.631580 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bx8x\" (UniqueName: \"kubernetes.io/projected/a7d200be-a60e-4759-8772-1845c1ab0534-kube-api-access-2bx8x\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-f674t\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:52 crc kubenswrapper[4747]: I1215 06:11:52.807223 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:11:53 crc kubenswrapper[4747]: I1215 06:11:53.280126 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t"] Dec 15 06:11:53 crc kubenswrapper[4747]: I1215 06:11:53.319392 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" event={"ID":"a7d200be-a60e-4759-8772-1845c1ab0534","Type":"ContainerStarted","Data":"0683d359b7aa043a02e523e161f2d53495581eebbd0bafab46005169c34e7e2a"} Dec 15 06:11:54 crc kubenswrapper[4747]: I1215 06:11:54.351309 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" event={"ID":"a7d200be-a60e-4759-8772-1845c1ab0534","Type":"ContainerStarted","Data":"ce72fae8e078da4973a10d5d137b5cd88500e55b61c01bbba66e0f5a23a6fe12"} Dec 15 06:11:54 crc kubenswrapper[4747]: I1215 06:11:54.373383 4747 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" podStartSLOduration=1.622033263 podStartE2EDuration="2.373361813s" podCreationTimestamp="2025-12-15 06:11:52 +0000 UTC" firstStartedPulling="2025-12-15 06:11:53.277267583 +0000 UTC m=+2076.973779500" lastFinishedPulling="2025-12-15 06:11:54.028596133 +0000 UTC m=+2077.725108050" observedRunningTime="2025-12-15 06:11:54.366060755 +0000 UTC m=+2078.062572672" watchObservedRunningTime="2025-12-15 06:11:54.373361813 +0000 UTC m=+2078.069873729" Dec 15 06:12:37 crc kubenswrapper[4747]: I1215 06:12:37.946106 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d4j2g"] Dec 15 06:12:37 crc kubenswrapper[4747]: I1215 06:12:37.948480 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d4j2g" Dec 15 06:12:37 crc kubenswrapper[4747]: I1215 06:12:37.988781 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d4j2g"] Dec 15 06:12:38 crc kubenswrapper[4747]: I1215 06:12:38.011540 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfm9b\" (UniqueName: \"kubernetes.io/projected/f8e659a1-b077-4547-b0e3-d06f1a0ce524-kube-api-access-wfm9b\") pod \"redhat-marketplace-d4j2g\" (UID: \"f8e659a1-b077-4547-b0e3-d06f1a0ce524\") " pod="openshift-marketplace/redhat-marketplace-d4j2g" Dec 15 06:12:38 crc kubenswrapper[4747]: I1215 06:12:38.011742 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8e659a1-b077-4547-b0e3-d06f1a0ce524-utilities\") pod \"redhat-marketplace-d4j2g\" (UID: \"f8e659a1-b077-4547-b0e3-d06f1a0ce524\") " pod="openshift-marketplace/redhat-marketplace-d4j2g" Dec 15 06:12:38 crc kubenswrapper[4747]: I1215 06:12:38.012059 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8e659a1-b077-4547-b0e3-d06f1a0ce524-catalog-content\") pod \"redhat-marketplace-d4j2g\" (UID: \"f8e659a1-b077-4547-b0e3-d06f1a0ce524\") " pod="openshift-marketplace/redhat-marketplace-d4j2g" Dec 15 06:12:38 crc kubenswrapper[4747]: I1215 06:12:38.112994 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8e659a1-b077-4547-b0e3-d06f1a0ce524-utilities\") pod \"redhat-marketplace-d4j2g\" (UID: \"f8e659a1-b077-4547-b0e3-d06f1a0ce524\") " pod="openshift-marketplace/redhat-marketplace-d4j2g" Dec 15 06:12:38 crc kubenswrapper[4747]: I1215 06:12:38.113143 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8e659a1-b077-4547-b0e3-d06f1a0ce524-catalog-content\") pod \"redhat-marketplace-d4j2g\" (UID: \"f8e659a1-b077-4547-b0e3-d06f1a0ce524\") " pod="openshift-marketplace/redhat-marketplace-d4j2g" Dec 15 06:12:38 crc kubenswrapper[4747]: I1215 06:12:38.113237 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfm9b\" (UniqueName: \"kubernetes.io/projected/f8e659a1-b077-4547-b0e3-d06f1a0ce524-kube-api-access-wfm9b\") pod \"redhat-marketplace-d4j2g\" (UID: \"f8e659a1-b077-4547-b0e3-d06f1a0ce524\") " pod="openshift-marketplace/redhat-marketplace-d4j2g" Dec 15 06:12:38 crc kubenswrapper[4747]: I1215 06:12:38.113547 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8e659a1-b077-4547-b0e3-d06f1a0ce524-utilities\") pod \"redhat-marketplace-d4j2g\" (UID: \"f8e659a1-b077-4547-b0e3-d06f1a0ce524\") " pod="openshift-marketplace/redhat-marketplace-d4j2g" Dec 15 06:12:38 crc kubenswrapper[4747]: I1215 06:12:38.113583 4747 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8e659a1-b077-4547-b0e3-d06f1a0ce524-catalog-content\") pod \"redhat-marketplace-d4j2g\" (UID: \"f8e659a1-b077-4547-b0e3-d06f1a0ce524\") " pod="openshift-marketplace/redhat-marketplace-d4j2g" Dec 15 06:12:38 crc kubenswrapper[4747]: I1215 06:12:38.130564 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfm9b\" (UniqueName: \"kubernetes.io/projected/f8e659a1-b077-4547-b0e3-d06f1a0ce524-kube-api-access-wfm9b\") pod \"redhat-marketplace-d4j2g\" (UID: \"f8e659a1-b077-4547-b0e3-d06f1a0ce524\") " pod="openshift-marketplace/redhat-marketplace-d4j2g" Dec 15 06:12:38 crc kubenswrapper[4747]: I1215 06:12:38.266150 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d4j2g" Dec 15 06:12:38 crc kubenswrapper[4747]: I1215 06:12:38.704118 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d4j2g"] Dec 15 06:12:38 crc kubenswrapper[4747]: I1215 06:12:38.767094 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4j2g" event={"ID":"f8e659a1-b077-4547-b0e3-d06f1a0ce524","Type":"ContainerStarted","Data":"7ef6bb7be1cbb838e70aa20bfa74d5dad7ee6403f9e50ea8e61c444974ac29e2"} Dec 15 06:12:39 crc kubenswrapper[4747]: I1215 06:12:39.777671 4747 generic.go:334] "Generic (PLEG): container finished" podID="f8e659a1-b077-4547-b0e3-d06f1a0ce524" containerID="70b06c45720a3c6a922d0387291ba5b557024039ab7d9db64c6f914567587de2" exitCode=0 Dec 15 06:12:39 crc kubenswrapper[4747]: I1215 06:12:39.777751 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4j2g" event={"ID":"f8e659a1-b077-4547-b0e3-d06f1a0ce524","Type":"ContainerDied","Data":"70b06c45720a3c6a922d0387291ba5b557024039ab7d9db64c6f914567587de2"} Dec 15 06:12:41 crc kubenswrapper[4747]: I1215 06:12:41.797971 
4747 generic.go:334] "Generic (PLEG): container finished" podID="f8e659a1-b077-4547-b0e3-d06f1a0ce524" containerID="65a8fb95e487083145edb98ad3b0a76137f4d28e8a15d676483ba54ef4bdee74" exitCode=0 Dec 15 06:12:41 crc kubenswrapper[4747]: I1215 06:12:41.798027 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4j2g" event={"ID":"f8e659a1-b077-4547-b0e3-d06f1a0ce524","Type":"ContainerDied","Data":"65a8fb95e487083145edb98ad3b0a76137f4d28e8a15d676483ba54ef4bdee74"} Dec 15 06:12:42 crc kubenswrapper[4747]: I1215 06:12:42.814670 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4j2g" event={"ID":"f8e659a1-b077-4547-b0e3-d06f1a0ce524","Type":"ContainerStarted","Data":"a7443d83e811a753bc7335025e3f18362cd5ed3562ffca69cd42de3f9aed47a5"} Dec 15 06:12:42 crc kubenswrapper[4747]: I1215 06:12:42.836137 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d4j2g" podStartSLOduration=3.295387661 podStartE2EDuration="5.836120005s" podCreationTimestamp="2025-12-15 06:12:37 +0000 UTC" firstStartedPulling="2025-12-15 06:12:39.780002035 +0000 UTC m=+2123.476513952" lastFinishedPulling="2025-12-15 06:12:42.320734379 +0000 UTC m=+2126.017246296" observedRunningTime="2025-12-15 06:12:42.830740061 +0000 UTC m=+2126.527251967" watchObservedRunningTime="2025-12-15 06:12:42.836120005 +0000 UTC m=+2126.532631922" Dec 15 06:12:48 crc kubenswrapper[4747]: I1215 06:12:48.267296 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d4j2g" Dec 15 06:12:48 crc kubenswrapper[4747]: I1215 06:12:48.268053 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d4j2g" Dec 15 06:12:48 crc kubenswrapper[4747]: I1215 06:12:48.305436 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-d4j2g" Dec 15 06:12:48 crc kubenswrapper[4747]: I1215 06:12:48.918341 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d4j2g" Dec 15 06:12:48 crc kubenswrapper[4747]: I1215 06:12:48.965258 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d4j2g"] Dec 15 06:12:50 crc kubenswrapper[4747]: I1215 06:12:50.899288 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d4j2g" podUID="f8e659a1-b077-4547-b0e3-d06f1a0ce524" containerName="registry-server" containerID="cri-o://a7443d83e811a753bc7335025e3f18362cd5ed3562ffca69cd42de3f9aed47a5" gracePeriod=2 Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.297992 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d4j2g" Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.491270 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfm9b\" (UniqueName: \"kubernetes.io/projected/f8e659a1-b077-4547-b0e3-d06f1a0ce524-kube-api-access-wfm9b\") pod \"f8e659a1-b077-4547-b0e3-d06f1a0ce524\" (UID: \"f8e659a1-b077-4547-b0e3-d06f1a0ce524\") " Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.491327 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8e659a1-b077-4547-b0e3-d06f1a0ce524-utilities\") pod \"f8e659a1-b077-4547-b0e3-d06f1a0ce524\" (UID: \"f8e659a1-b077-4547-b0e3-d06f1a0ce524\") " Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.491378 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8e659a1-b077-4547-b0e3-d06f1a0ce524-catalog-content\") pod 
\"f8e659a1-b077-4547-b0e3-d06f1a0ce524\" (UID: \"f8e659a1-b077-4547-b0e3-d06f1a0ce524\") " Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.492368 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8e659a1-b077-4547-b0e3-d06f1a0ce524-utilities" (OuterVolumeSpecName: "utilities") pod "f8e659a1-b077-4547-b0e3-d06f1a0ce524" (UID: "f8e659a1-b077-4547-b0e3-d06f1a0ce524"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.498351 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8e659a1-b077-4547-b0e3-d06f1a0ce524-kube-api-access-wfm9b" (OuterVolumeSpecName: "kube-api-access-wfm9b") pod "f8e659a1-b077-4547-b0e3-d06f1a0ce524" (UID: "f8e659a1-b077-4547-b0e3-d06f1a0ce524"). InnerVolumeSpecName "kube-api-access-wfm9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.508791 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8e659a1-b077-4547-b0e3-d06f1a0ce524-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8e659a1-b077-4547-b0e3-d06f1a0ce524" (UID: "f8e659a1-b077-4547-b0e3-d06f1a0ce524"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.593835 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfm9b\" (UniqueName: \"kubernetes.io/projected/f8e659a1-b077-4547-b0e3-d06f1a0ce524-kube-api-access-wfm9b\") on node \"crc\" DevicePath \"\"" Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.593876 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8e659a1-b077-4547-b0e3-d06f1a0ce524-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.593891 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8e659a1-b077-4547-b0e3-d06f1a0ce524-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.911045 4747 generic.go:334] "Generic (PLEG): container finished" podID="f8e659a1-b077-4547-b0e3-d06f1a0ce524" containerID="a7443d83e811a753bc7335025e3f18362cd5ed3562ffca69cd42de3f9aed47a5" exitCode=0 Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.911105 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d4j2g" Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.911109 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4j2g" event={"ID":"f8e659a1-b077-4547-b0e3-d06f1a0ce524","Type":"ContainerDied","Data":"a7443d83e811a753bc7335025e3f18362cd5ed3562ffca69cd42de3f9aed47a5"} Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.911157 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d4j2g" event={"ID":"f8e659a1-b077-4547-b0e3-d06f1a0ce524","Type":"ContainerDied","Data":"7ef6bb7be1cbb838e70aa20bfa74d5dad7ee6403f9e50ea8e61c444974ac29e2"} Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.911180 4747 scope.go:117] "RemoveContainer" containerID="a7443d83e811a753bc7335025e3f18362cd5ed3562ffca69cd42de3f9aed47a5" Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.936109 4747 scope.go:117] "RemoveContainer" containerID="65a8fb95e487083145edb98ad3b0a76137f4d28e8a15d676483ba54ef4bdee74" Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.943584 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d4j2g"] Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.951184 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d4j2g"] Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.968114 4747 scope.go:117] "RemoveContainer" containerID="70b06c45720a3c6a922d0387291ba5b557024039ab7d9db64c6f914567587de2" Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.987660 4747 scope.go:117] "RemoveContainer" containerID="a7443d83e811a753bc7335025e3f18362cd5ed3562ffca69cd42de3f9aed47a5" Dec 15 06:12:51 crc kubenswrapper[4747]: E1215 06:12:51.988065 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a7443d83e811a753bc7335025e3f18362cd5ed3562ffca69cd42de3f9aed47a5\": container with ID starting with a7443d83e811a753bc7335025e3f18362cd5ed3562ffca69cd42de3f9aed47a5 not found: ID does not exist" containerID="a7443d83e811a753bc7335025e3f18362cd5ed3562ffca69cd42de3f9aed47a5" Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.988112 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7443d83e811a753bc7335025e3f18362cd5ed3562ffca69cd42de3f9aed47a5"} err="failed to get container status \"a7443d83e811a753bc7335025e3f18362cd5ed3562ffca69cd42de3f9aed47a5\": rpc error: code = NotFound desc = could not find container \"a7443d83e811a753bc7335025e3f18362cd5ed3562ffca69cd42de3f9aed47a5\": container with ID starting with a7443d83e811a753bc7335025e3f18362cd5ed3562ffca69cd42de3f9aed47a5 not found: ID does not exist" Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.988137 4747 scope.go:117] "RemoveContainer" containerID="65a8fb95e487083145edb98ad3b0a76137f4d28e8a15d676483ba54ef4bdee74" Dec 15 06:12:51 crc kubenswrapper[4747]: E1215 06:12:51.989218 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65a8fb95e487083145edb98ad3b0a76137f4d28e8a15d676483ba54ef4bdee74\": container with ID starting with 65a8fb95e487083145edb98ad3b0a76137f4d28e8a15d676483ba54ef4bdee74 not found: ID does not exist" containerID="65a8fb95e487083145edb98ad3b0a76137f4d28e8a15d676483ba54ef4bdee74" Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.989263 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65a8fb95e487083145edb98ad3b0a76137f4d28e8a15d676483ba54ef4bdee74"} err="failed to get container status \"65a8fb95e487083145edb98ad3b0a76137f4d28e8a15d676483ba54ef4bdee74\": rpc error: code = NotFound desc = could not find container \"65a8fb95e487083145edb98ad3b0a76137f4d28e8a15d676483ba54ef4bdee74\": container with ID 
starting with 65a8fb95e487083145edb98ad3b0a76137f4d28e8a15d676483ba54ef4bdee74 not found: ID does not exist" Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.989293 4747 scope.go:117] "RemoveContainer" containerID="70b06c45720a3c6a922d0387291ba5b557024039ab7d9db64c6f914567587de2" Dec 15 06:12:51 crc kubenswrapper[4747]: E1215 06:12:51.989627 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70b06c45720a3c6a922d0387291ba5b557024039ab7d9db64c6f914567587de2\": container with ID starting with 70b06c45720a3c6a922d0387291ba5b557024039ab7d9db64c6f914567587de2 not found: ID does not exist" containerID="70b06c45720a3c6a922d0387291ba5b557024039ab7d9db64c6f914567587de2" Dec 15 06:12:51 crc kubenswrapper[4747]: I1215 06:12:51.989662 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70b06c45720a3c6a922d0387291ba5b557024039ab7d9db64c6f914567587de2"} err="failed to get container status \"70b06c45720a3c6a922d0387291ba5b557024039ab7d9db64c6f914567587de2\": rpc error: code = NotFound desc = could not find container \"70b06c45720a3c6a922d0387291ba5b557024039ab7d9db64c6f914567587de2\": container with ID starting with 70b06c45720a3c6a922d0387291ba5b557024039ab7d9db64c6f914567587de2 not found: ID does not exist" Dec 15 06:12:52 crc kubenswrapper[4747]: I1215 06:12:52.639468 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8e659a1-b077-4547-b0e3-d06f1a0ce524" path="/var/lib/kubelet/pods/f8e659a1-b077-4547-b0e3-d06f1a0ce524/volumes" Dec 15 06:12:58 crc kubenswrapper[4747]: I1215 06:12:58.865394 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:12:58 crc kubenswrapper[4747]: I1215 
06:12:58.866061 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:13:28 crc kubenswrapper[4747]: I1215 06:13:28.865728 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:13:28 crc kubenswrapper[4747]: I1215 06:13:28.866457 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:13:30 crc kubenswrapper[4747]: I1215 06:13:30.264430 4747 generic.go:334] "Generic (PLEG): container finished" podID="a7d200be-a60e-4759-8772-1845c1ab0534" containerID="ce72fae8e078da4973a10d5d137b5cd88500e55b61c01bbba66e0f5a23a6fe12" exitCode=0 Dec 15 06:13:30 crc kubenswrapper[4747]: I1215 06:13:30.264528 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" event={"ID":"a7d200be-a60e-4759-8772-1845c1ab0534","Type":"ContainerDied","Data":"ce72fae8e078da4973a10d5d137b5cd88500e55b61c01bbba66e0f5a23a6fe12"} Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.580329 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.770902 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bx8x\" (UniqueName: \"kubernetes.io/projected/a7d200be-a60e-4759-8772-1845c1ab0534-kube-api-access-2bx8x\") pod \"a7d200be-a60e-4759-8772-1845c1ab0534\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.771016 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-telemetry-combined-ca-bundle\") pod \"a7d200be-a60e-4759-8772-1845c1ab0534\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.771079 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-inventory\") pod \"a7d200be-a60e-4759-8772-1845c1ab0534\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.771124 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ssh-key\") pod \"a7d200be-a60e-4759-8772-1845c1ab0534\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.771156 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ceilometer-compute-config-data-2\") pod \"a7d200be-a60e-4759-8772-1845c1ab0534\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.771204 4747 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ceilometer-compute-config-data-1\") pod \"a7d200be-a60e-4759-8772-1845c1ab0534\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.771239 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ceilometer-compute-config-data-0\") pod \"a7d200be-a60e-4759-8772-1845c1ab0534\" (UID: \"a7d200be-a60e-4759-8772-1845c1ab0534\") " Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.778710 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "a7d200be-a60e-4759-8772-1845c1ab0534" (UID: "a7d200be-a60e-4759-8772-1845c1ab0534"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.779423 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7d200be-a60e-4759-8772-1845c1ab0534-kube-api-access-2bx8x" (OuterVolumeSpecName: "kube-api-access-2bx8x") pod "a7d200be-a60e-4759-8772-1845c1ab0534" (UID: "a7d200be-a60e-4759-8772-1845c1ab0534"). InnerVolumeSpecName "kube-api-access-2bx8x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.800143 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "a7d200be-a60e-4759-8772-1845c1ab0534" (UID: "a7d200be-a60e-4759-8772-1845c1ab0534"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.800804 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "a7d200be-a60e-4759-8772-1845c1ab0534" (UID: "a7d200be-a60e-4759-8772-1845c1ab0534"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.800827 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "a7d200be-a60e-4759-8772-1845c1ab0534" (UID: "a7d200be-a60e-4759-8772-1845c1ab0534"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.801593 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a7d200be-a60e-4759-8772-1845c1ab0534" (UID: "a7d200be-a60e-4759-8772-1845c1ab0534"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.801693 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-inventory" (OuterVolumeSpecName: "inventory") pod "a7d200be-a60e-4759-8772-1845c1ab0534" (UID: "a7d200be-a60e-4759-8772-1845c1ab0534"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.874677 4747 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.874710 4747 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-inventory\") on node \"crc\" DevicePath \"\"" Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.874720 4747 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.874733 4747 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.874745 4747 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.874757 4747 reconciler_common.go:293] "Volume detached for volume 
\"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a7d200be-a60e-4759-8772-1845c1ab0534-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 15 06:13:31 crc kubenswrapper[4747]: I1215 06:13:31.874767 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bx8x\" (UniqueName: \"kubernetes.io/projected/a7d200be-a60e-4759-8772-1845c1ab0534-kube-api-access-2bx8x\") on node \"crc\" DevicePath \"\"" Dec 15 06:13:32 crc kubenswrapper[4747]: I1215 06:13:32.285762 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" event={"ID":"a7d200be-a60e-4759-8772-1845c1ab0534","Type":"ContainerDied","Data":"0683d359b7aa043a02e523e161f2d53495581eebbd0bafab46005169c34e7e2a"} Dec 15 06:13:32 crc kubenswrapper[4747]: I1215 06:13:32.285821 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0683d359b7aa043a02e523e161f2d53495581eebbd0bafab46005169c34e7e2a" Dec 15 06:13:32 crc kubenswrapper[4747]: I1215 06:13:32.285902 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-f674t" Dec 15 06:13:35 crc kubenswrapper[4747]: I1215 06:13:35.296374 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pndcb"] Dec 15 06:13:35 crc kubenswrapper[4747]: E1215 06:13:35.298457 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e659a1-b077-4547-b0e3-d06f1a0ce524" containerName="registry-server" Dec 15 06:13:35 crc kubenswrapper[4747]: I1215 06:13:35.298584 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e659a1-b077-4547-b0e3-d06f1a0ce524" containerName="registry-server" Dec 15 06:13:35 crc kubenswrapper[4747]: E1215 06:13:35.298660 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e659a1-b077-4547-b0e3-d06f1a0ce524" containerName="extract-utilities" Dec 15 06:13:35 crc kubenswrapper[4747]: I1215 06:13:35.298722 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e659a1-b077-4547-b0e3-d06f1a0ce524" containerName="extract-utilities" Dec 15 06:13:35 crc kubenswrapper[4747]: E1215 06:13:35.298782 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e659a1-b077-4547-b0e3-d06f1a0ce524" containerName="extract-content" Dec 15 06:13:35 crc kubenswrapper[4747]: I1215 06:13:35.298834 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e659a1-b077-4547-b0e3-d06f1a0ce524" containerName="extract-content" Dec 15 06:13:35 crc kubenswrapper[4747]: E1215 06:13:35.298907 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7d200be-a60e-4759-8772-1845c1ab0534" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 15 06:13:35 crc kubenswrapper[4747]: I1215 06:13:35.298992 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7d200be-a60e-4759-8772-1845c1ab0534" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 15 06:13:35 crc kubenswrapper[4747]: I1215 06:13:35.299293 4747 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a7d200be-a60e-4759-8772-1845c1ab0534" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 15 06:13:35 crc kubenswrapper[4747]: I1215 06:13:35.300088 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8e659a1-b077-4547-b0e3-d06f1a0ce524" containerName="registry-server" Dec 15 06:13:35 crc kubenswrapper[4747]: I1215 06:13:35.301729 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pndcb" Dec 15 06:13:35 crc kubenswrapper[4747]: I1215 06:13:35.305400 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pndcb"] Dec 15 06:13:35 crc kubenswrapper[4747]: I1215 06:13:35.358819 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0c808b4-a500-4d73-9e5d-6c100ebecccd-utilities\") pod \"community-operators-pndcb\" (UID: \"a0c808b4-a500-4d73-9e5d-6c100ebecccd\") " pod="openshift-marketplace/community-operators-pndcb" Dec 15 06:13:35 crc kubenswrapper[4747]: I1215 06:13:35.358957 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd98c\" (UniqueName: \"kubernetes.io/projected/a0c808b4-a500-4d73-9e5d-6c100ebecccd-kube-api-access-zd98c\") pod \"community-operators-pndcb\" (UID: \"a0c808b4-a500-4d73-9e5d-6c100ebecccd\") " pod="openshift-marketplace/community-operators-pndcb" Dec 15 06:13:35 crc kubenswrapper[4747]: I1215 06:13:35.359126 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0c808b4-a500-4d73-9e5d-6c100ebecccd-catalog-content\") pod \"community-operators-pndcb\" (UID: \"a0c808b4-a500-4d73-9e5d-6c100ebecccd\") " pod="openshift-marketplace/community-operators-pndcb" Dec 15 06:13:35 
crc kubenswrapper[4747]: I1215 06:13:35.461668 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0c808b4-a500-4d73-9e5d-6c100ebecccd-catalog-content\") pod \"community-operators-pndcb\" (UID: \"a0c808b4-a500-4d73-9e5d-6c100ebecccd\") " pod="openshift-marketplace/community-operators-pndcb" Dec 15 06:13:35 crc kubenswrapper[4747]: I1215 06:13:35.461954 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0c808b4-a500-4d73-9e5d-6c100ebecccd-utilities\") pod \"community-operators-pndcb\" (UID: \"a0c808b4-a500-4d73-9e5d-6c100ebecccd\") " pod="openshift-marketplace/community-operators-pndcb" Dec 15 06:13:35 crc kubenswrapper[4747]: I1215 06:13:35.462254 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd98c\" (UniqueName: \"kubernetes.io/projected/a0c808b4-a500-4d73-9e5d-6c100ebecccd-kube-api-access-zd98c\") pod \"community-operators-pndcb\" (UID: \"a0c808b4-a500-4d73-9e5d-6c100ebecccd\") " pod="openshift-marketplace/community-operators-pndcb" Dec 15 06:13:35 crc kubenswrapper[4747]: I1215 06:13:35.462395 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0c808b4-a500-4d73-9e5d-6c100ebecccd-utilities\") pod \"community-operators-pndcb\" (UID: \"a0c808b4-a500-4d73-9e5d-6c100ebecccd\") " pod="openshift-marketplace/community-operators-pndcb" Dec 15 06:13:35 crc kubenswrapper[4747]: I1215 06:13:35.462247 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0c808b4-a500-4d73-9e5d-6c100ebecccd-catalog-content\") pod \"community-operators-pndcb\" (UID: \"a0c808b4-a500-4d73-9e5d-6c100ebecccd\") " pod="openshift-marketplace/community-operators-pndcb" Dec 15 06:13:35 crc kubenswrapper[4747]: I1215 
06:13:35.480170 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd98c\" (UniqueName: \"kubernetes.io/projected/a0c808b4-a500-4d73-9e5d-6c100ebecccd-kube-api-access-zd98c\") pod \"community-operators-pndcb\" (UID: \"a0c808b4-a500-4d73-9e5d-6c100ebecccd\") " pod="openshift-marketplace/community-operators-pndcb" Dec 15 06:13:35 crc kubenswrapper[4747]: I1215 06:13:35.620736 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pndcb" Dec 15 06:13:36 crc kubenswrapper[4747]: I1215 06:13:36.112921 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pndcb"] Dec 15 06:13:36 crc kubenswrapper[4747]: I1215 06:13:36.324637 4747 generic.go:334] "Generic (PLEG): container finished" podID="a0c808b4-a500-4d73-9e5d-6c100ebecccd" containerID="4b1cc554d04b4129951fe98783c73e34a43516fed56bd3bc4e3d21f176bf0039" exitCode=0 Dec 15 06:13:36 crc kubenswrapper[4747]: I1215 06:13:36.324709 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pndcb" event={"ID":"a0c808b4-a500-4d73-9e5d-6c100ebecccd","Type":"ContainerDied","Data":"4b1cc554d04b4129951fe98783c73e34a43516fed56bd3bc4e3d21f176bf0039"} Dec 15 06:13:36 crc kubenswrapper[4747]: I1215 06:13:36.325192 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pndcb" event={"ID":"a0c808b4-a500-4d73-9e5d-6c100ebecccd","Type":"ContainerStarted","Data":"6f7432612a24bd9a805b333b2d7759d1d55c07baec544278c77aa42a59ce03bb"} Dec 15 06:13:36 crc kubenswrapper[4747]: I1215 06:13:36.328433 4747 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 15 06:13:37 crc kubenswrapper[4747]: I1215 06:13:37.338579 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pndcb" 
event={"ID":"a0c808b4-a500-4d73-9e5d-6c100ebecccd","Type":"ContainerStarted","Data":"9a77653173c47f8269981193b20c09dfa13aee1caf75d924f4106d2312aeb896"} Dec 15 06:13:38 crc kubenswrapper[4747]: I1215 06:13:38.353301 4747 generic.go:334] "Generic (PLEG): container finished" podID="a0c808b4-a500-4d73-9e5d-6c100ebecccd" containerID="9a77653173c47f8269981193b20c09dfa13aee1caf75d924f4106d2312aeb896" exitCode=0 Dec 15 06:13:38 crc kubenswrapper[4747]: I1215 06:13:38.353366 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pndcb" event={"ID":"a0c808b4-a500-4d73-9e5d-6c100ebecccd","Type":"ContainerDied","Data":"9a77653173c47f8269981193b20c09dfa13aee1caf75d924f4106d2312aeb896"} Dec 15 06:13:39 crc kubenswrapper[4747]: I1215 06:13:39.366100 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pndcb" event={"ID":"a0c808b4-a500-4d73-9e5d-6c100ebecccd","Type":"ContainerStarted","Data":"11cfc394abed3371c33bd0d42d1d5abbf9f28b6c70aec5d7c2c3691db5514641"} Dec 15 06:13:39 crc kubenswrapper[4747]: I1215 06:13:39.393418 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pndcb" podStartSLOduration=1.81713915 podStartE2EDuration="4.393395155s" podCreationTimestamp="2025-12-15 06:13:35 +0000 UTC" firstStartedPulling="2025-12-15 06:13:36.3279665 +0000 UTC m=+2180.024478416" lastFinishedPulling="2025-12-15 06:13:38.904222504 +0000 UTC m=+2182.600734421" observedRunningTime="2025-12-15 06:13:39.38364393 +0000 UTC m=+2183.080155848" watchObservedRunningTime="2025-12-15 06:13:39.393395155 +0000 UTC m=+2183.089907072" Dec 15 06:13:45 crc kubenswrapper[4747]: I1215 06:13:45.621697 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pndcb" Dec 15 06:13:45 crc kubenswrapper[4747]: I1215 06:13:45.622657 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-pndcb" Dec 15 06:13:45 crc kubenswrapper[4747]: I1215 06:13:45.658740 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pndcb" Dec 15 06:13:46 crc kubenswrapper[4747]: I1215 06:13:46.481002 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pndcb" Dec 15 06:13:46 crc kubenswrapper[4747]: I1215 06:13:46.535243 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pndcb"] Dec 15 06:13:48 crc kubenswrapper[4747]: I1215 06:13:48.457462 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pndcb" podUID="a0c808b4-a500-4d73-9e5d-6c100ebecccd" containerName="registry-server" containerID="cri-o://11cfc394abed3371c33bd0d42d1d5abbf9f28b6c70aec5d7c2c3691db5514641" gracePeriod=2 Dec 15 06:13:48 crc kubenswrapper[4747]: I1215 06:13:48.843212 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pndcb" Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.040731 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd98c\" (UniqueName: \"kubernetes.io/projected/a0c808b4-a500-4d73-9e5d-6c100ebecccd-kube-api-access-zd98c\") pod \"a0c808b4-a500-4d73-9e5d-6c100ebecccd\" (UID: \"a0c808b4-a500-4d73-9e5d-6c100ebecccd\") " Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.040922 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0c808b4-a500-4d73-9e5d-6c100ebecccd-utilities\") pod \"a0c808b4-a500-4d73-9e5d-6c100ebecccd\" (UID: \"a0c808b4-a500-4d73-9e5d-6c100ebecccd\") " Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.041045 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0c808b4-a500-4d73-9e5d-6c100ebecccd-catalog-content\") pod \"a0c808b4-a500-4d73-9e5d-6c100ebecccd\" (UID: \"a0c808b4-a500-4d73-9e5d-6c100ebecccd\") " Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.042000 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0c808b4-a500-4d73-9e5d-6c100ebecccd-utilities" (OuterVolumeSpecName: "utilities") pod "a0c808b4-a500-4d73-9e5d-6c100ebecccd" (UID: "a0c808b4-a500-4d73-9e5d-6c100ebecccd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.046625 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0c808b4-a500-4d73-9e5d-6c100ebecccd-kube-api-access-zd98c" (OuterVolumeSpecName: "kube-api-access-zd98c") pod "a0c808b4-a500-4d73-9e5d-6c100ebecccd" (UID: "a0c808b4-a500-4d73-9e5d-6c100ebecccd"). InnerVolumeSpecName "kube-api-access-zd98c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.081744 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0c808b4-a500-4d73-9e5d-6c100ebecccd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0c808b4-a500-4d73-9e5d-6c100ebecccd" (UID: "a0c808b4-a500-4d73-9e5d-6c100ebecccd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.144386 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0c808b4-a500-4d73-9e5d-6c100ebecccd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.144437 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd98c\" (UniqueName: \"kubernetes.io/projected/a0c808b4-a500-4d73-9e5d-6c100ebecccd-kube-api-access-zd98c\") on node \"crc\" DevicePath \"\"" Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.144454 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0c808b4-a500-4d73-9e5d-6c100ebecccd-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.469150 4747 generic.go:334] "Generic (PLEG): container finished" podID="a0c808b4-a500-4d73-9e5d-6c100ebecccd" containerID="11cfc394abed3371c33bd0d42d1d5abbf9f28b6c70aec5d7c2c3691db5514641" exitCode=0 Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.469261 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pndcb" event={"ID":"a0c808b4-a500-4d73-9e5d-6c100ebecccd","Type":"ContainerDied","Data":"11cfc394abed3371c33bd0d42d1d5abbf9f28b6c70aec5d7c2c3691db5514641"} Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.469358 4747 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-pndcb" Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.470763 4747 scope.go:117] "RemoveContainer" containerID="11cfc394abed3371c33bd0d42d1d5abbf9f28b6c70aec5d7c2c3691db5514641" Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.470678 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pndcb" event={"ID":"a0c808b4-a500-4d73-9e5d-6c100ebecccd","Type":"ContainerDied","Data":"6f7432612a24bd9a805b333b2d7759d1d55c07baec544278c77aa42a59ce03bb"} Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.497342 4747 scope.go:117] "RemoveContainer" containerID="9a77653173c47f8269981193b20c09dfa13aee1caf75d924f4106d2312aeb896" Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.505016 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pndcb"] Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.515375 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pndcb"] Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.517258 4747 scope.go:117] "RemoveContainer" containerID="4b1cc554d04b4129951fe98783c73e34a43516fed56bd3bc4e3d21f176bf0039" Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.548030 4747 scope.go:117] "RemoveContainer" containerID="11cfc394abed3371c33bd0d42d1d5abbf9f28b6c70aec5d7c2c3691db5514641" Dec 15 06:13:49 crc kubenswrapper[4747]: E1215 06:13:49.548504 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11cfc394abed3371c33bd0d42d1d5abbf9f28b6c70aec5d7c2c3691db5514641\": container with ID starting with 11cfc394abed3371c33bd0d42d1d5abbf9f28b6c70aec5d7c2c3691db5514641 not found: ID does not exist" containerID="11cfc394abed3371c33bd0d42d1d5abbf9f28b6c70aec5d7c2c3691db5514641" Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.548537 
4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11cfc394abed3371c33bd0d42d1d5abbf9f28b6c70aec5d7c2c3691db5514641"} err="failed to get container status \"11cfc394abed3371c33bd0d42d1d5abbf9f28b6c70aec5d7c2c3691db5514641\": rpc error: code = NotFound desc = could not find container \"11cfc394abed3371c33bd0d42d1d5abbf9f28b6c70aec5d7c2c3691db5514641\": container with ID starting with 11cfc394abed3371c33bd0d42d1d5abbf9f28b6c70aec5d7c2c3691db5514641 not found: ID does not exist" Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.548563 4747 scope.go:117] "RemoveContainer" containerID="9a77653173c47f8269981193b20c09dfa13aee1caf75d924f4106d2312aeb896" Dec 15 06:13:49 crc kubenswrapper[4747]: E1215 06:13:49.548950 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a77653173c47f8269981193b20c09dfa13aee1caf75d924f4106d2312aeb896\": container with ID starting with 9a77653173c47f8269981193b20c09dfa13aee1caf75d924f4106d2312aeb896 not found: ID does not exist" containerID="9a77653173c47f8269981193b20c09dfa13aee1caf75d924f4106d2312aeb896" Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.548980 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a77653173c47f8269981193b20c09dfa13aee1caf75d924f4106d2312aeb896"} err="failed to get container status \"9a77653173c47f8269981193b20c09dfa13aee1caf75d924f4106d2312aeb896\": rpc error: code = NotFound desc = could not find container \"9a77653173c47f8269981193b20c09dfa13aee1caf75d924f4106d2312aeb896\": container with ID starting with 9a77653173c47f8269981193b20c09dfa13aee1caf75d924f4106d2312aeb896 not found: ID does not exist" Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.549002 4747 scope.go:117] "RemoveContainer" containerID="4b1cc554d04b4129951fe98783c73e34a43516fed56bd3bc4e3d21f176bf0039" Dec 15 06:13:49 crc kubenswrapper[4747]: E1215 
06:13:49.549313 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b1cc554d04b4129951fe98783c73e34a43516fed56bd3bc4e3d21f176bf0039\": container with ID starting with 4b1cc554d04b4129951fe98783c73e34a43516fed56bd3bc4e3d21f176bf0039 not found: ID does not exist" containerID="4b1cc554d04b4129951fe98783c73e34a43516fed56bd3bc4e3d21f176bf0039" Dec 15 06:13:49 crc kubenswrapper[4747]: I1215 06:13:49.549350 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b1cc554d04b4129951fe98783c73e34a43516fed56bd3bc4e3d21f176bf0039"} err="failed to get container status \"4b1cc554d04b4129951fe98783c73e34a43516fed56bd3bc4e3d21f176bf0039\": rpc error: code = NotFound desc = could not find container \"4b1cc554d04b4129951fe98783c73e34a43516fed56bd3bc4e3d21f176bf0039\": container with ID starting with 4b1cc554d04b4129951fe98783c73e34a43516fed56bd3bc4e3d21f176bf0039 not found: ID does not exist" Dec 15 06:13:50 crc kubenswrapper[4747]: I1215 06:13:50.640428 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0c808b4-a500-4d73-9e5d-6c100ebecccd" path="/var/lib/kubelet/pods/a0c808b4-a500-4d73-9e5d-6c100ebecccd/volumes" Dec 15 06:13:58 crc kubenswrapper[4747]: I1215 06:13:58.864960 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:13:58 crc kubenswrapper[4747]: I1215 06:13:58.866660 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Dec 15 06:13:58 crc kubenswrapper[4747]: I1215 06:13:58.866788 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 06:13:58 crc kubenswrapper[4747]: I1215 06:13:58.867599 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"792cda812eb2119b7ff0b41b927687bb15e0e2fd42f24ffa26a56782c6542e51"} pod="openshift-machine-config-operator/machine-config-daemon-nldtn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 15 06:13:58 crc kubenswrapper[4747]: I1215 06:13:58.867742 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" containerID="cri-o://792cda812eb2119b7ff0b41b927687bb15e0e2fd42f24ffa26a56782c6542e51" gracePeriod=600 Dec 15 06:13:59 crc kubenswrapper[4747]: I1215 06:13:59.565945 4747 generic.go:334] "Generic (PLEG): container finished" podID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerID="792cda812eb2119b7ff0b41b927687bb15e0e2fd42f24ffa26a56782c6542e51" exitCode=0 Dec 15 06:13:59 crc kubenswrapper[4747]: I1215 06:13:59.566006 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerDied","Data":"792cda812eb2119b7ff0b41b927687bb15e0e2fd42f24ffa26a56782c6542e51"} Dec 15 06:13:59 crc kubenswrapper[4747]: I1215 06:13:59.566265 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerStarted","Data":"9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16"} Dec 15 06:13:59 crc 
kubenswrapper[4747]: I1215 06:13:59.566298 4747 scope.go:117] "RemoveContainer" containerID="1fb2c7cbf1bffaa65209c28e6e7abe2bf250e060aa38bf5208a3810b074d8610" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.424969 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 15 06:14:28 crc kubenswrapper[4747]: E1215 06:14:28.426547 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c808b4-a500-4d73-9e5d-6c100ebecccd" containerName="extract-utilities" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.426568 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c808b4-a500-4d73-9e5d-6c100ebecccd" containerName="extract-utilities" Dec 15 06:14:28 crc kubenswrapper[4747]: E1215 06:14:28.426577 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c808b4-a500-4d73-9e5d-6c100ebecccd" containerName="registry-server" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.426582 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c808b4-a500-4d73-9e5d-6c100ebecccd" containerName="registry-server" Dec 15 06:14:28 crc kubenswrapper[4747]: E1215 06:14:28.426602 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c808b4-a500-4d73-9e5d-6c100ebecccd" containerName="extract-content" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.426608 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c808b4-a500-4d73-9e5d-6c100ebecccd" containerName="extract-content" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.426911 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0c808b4-a500-4d73-9e5d-6c100ebecccd" containerName="registry-server" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.427913 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.431501 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.431629 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fdjw5" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.431798 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.432019 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.444058 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.444962 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0feaf663-b187-479f-8129-5aa6bf3b9047-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.445085 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0feaf663-b187-479f-8129-5aa6bf3b9047-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.445522 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/0feaf663-b187-479f-8129-5aa6bf3b9047-config-data\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.547975 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk7v2\" (UniqueName: \"kubernetes.io/projected/0feaf663-b187-479f-8129-5aa6bf3b9047-kube-api-access-nk7v2\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.548128 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0feaf663-b187-479f-8129-5aa6bf3b9047-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.548165 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0feaf663-b187-479f-8129-5aa6bf3b9047-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.548203 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0feaf663-b187-479f-8129-5aa6bf3b9047-config-data\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.548301 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/0feaf663-b187-479f-8129-5aa6bf3b9047-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.548379 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0feaf663-b187-479f-8129-5aa6bf3b9047-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.548442 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0feaf663-b187-479f-8129-5aa6bf3b9047-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.548500 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.548560 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0feaf663-b187-479f-8129-5aa6bf3b9047-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.549389 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0feaf663-b187-479f-8129-5aa6bf3b9047-config-data\") pod 
\"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.549594 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0feaf663-b187-479f-8129-5aa6bf3b9047-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.556730 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0feaf663-b187-479f-8129-5aa6bf3b9047-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.650279 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.650622 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0feaf663-b187-479f-8129-5aa6bf3b9047-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.650679 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc 
kubenswrapper[4747]: I1215 06:14:28.650873 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk7v2\" (UniqueName: \"kubernetes.io/projected/0feaf663-b187-479f-8129-5aa6bf3b9047-kube-api-access-nk7v2\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.651013 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0feaf663-b187-479f-8129-5aa6bf3b9047-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.651069 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0feaf663-b187-479f-8129-5aa6bf3b9047-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.651146 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0feaf663-b187-479f-8129-5aa6bf3b9047-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.651767 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0feaf663-b187-479f-8129-5aa6bf3b9047-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: 
I1215 06:14:28.651985 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0feaf663-b187-479f-8129-5aa6bf3b9047-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.655515 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0feaf663-b187-479f-8129-5aa6bf3b9047-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.656262 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0feaf663-b187-479f-8129-5aa6bf3b9047-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.666584 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk7v2\" (UniqueName: \"kubernetes.io/projected/0feaf663-b187-479f-8129-5aa6bf3b9047-kube-api-access-nk7v2\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.672798 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " pod="openstack/tempest-tests-tempest" Dec 15 06:14:28 crc kubenswrapper[4747]: I1215 06:14:28.748819 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 15 06:14:29 crc kubenswrapper[4747]: I1215 06:14:29.172858 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 15 06:14:29 crc kubenswrapper[4747]: W1215 06:14:29.172877 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0feaf663_b187_479f_8129_5aa6bf3b9047.slice/crio-c6e509740af1d4331af655bdf15357ef739ac1c5e40b9056664ed3e99cb15c46 WatchSource:0}: Error finding container c6e509740af1d4331af655bdf15357ef739ac1c5e40b9056664ed3e99cb15c46: Status 404 returned error can't find the container with id c6e509740af1d4331af655bdf15357ef739ac1c5e40b9056664ed3e99cb15c46 Dec 15 06:14:29 crc kubenswrapper[4747]: I1215 06:14:29.837815 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0feaf663-b187-479f-8129-5aa6bf3b9047","Type":"ContainerStarted","Data":"c6e509740af1d4331af655bdf15357ef739ac1c5e40b9056664ed3e99cb15c46"} Dec 15 06:14:44 crc kubenswrapper[4747]: I1215 06:14:44.992288 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0feaf663-b187-479f-8129-5aa6bf3b9047","Type":"ContainerStarted","Data":"c651b9f9b46b04dc8f22725cc433a185fffb5dc7c7926f207814eedd8a92eb3e"} Dec 15 06:14:45 crc kubenswrapper[4747]: I1215 06:14:45.039514 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=2.908800606 podStartE2EDuration="18.0394916s" podCreationTimestamp="2025-12-15 06:14:27 +0000 UTC" firstStartedPulling="2025-12-15 06:14:29.179704128 +0000 UTC m=+2232.876216045" lastFinishedPulling="2025-12-15 06:14:44.310395122 +0000 UTC m=+2248.006907039" observedRunningTime="2025-12-15 06:14:45.035975358 +0000 UTC m=+2248.732487275" watchObservedRunningTime="2025-12-15 06:14:45.0394916 +0000 UTC m=+2248.736003517" Dec 15 
06:15:00 crc kubenswrapper[4747]: I1215 06:15:00.140206 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29429655-bqxrg"] Dec 15 06:15:00 crc kubenswrapper[4747]: I1215 06:15:00.141914 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29429655-bqxrg" Dec 15 06:15:00 crc kubenswrapper[4747]: I1215 06:15:00.143513 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 15 06:15:00 crc kubenswrapper[4747]: I1215 06:15:00.143746 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 15 06:15:00 crc kubenswrapper[4747]: I1215 06:15:00.169716 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29429655-bqxrg"] Dec 15 06:15:00 crc kubenswrapper[4747]: I1215 06:15:00.250055 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgv9b\" (UniqueName: \"kubernetes.io/projected/ecab19d4-0f98-436c-85ab-dd26b24a3374-kube-api-access-fgv9b\") pod \"collect-profiles-29429655-bqxrg\" (UID: \"ecab19d4-0f98-436c-85ab-dd26b24a3374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429655-bqxrg" Dec 15 06:15:00 crc kubenswrapper[4747]: I1215 06:15:00.250161 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecab19d4-0f98-436c-85ab-dd26b24a3374-secret-volume\") pod \"collect-profiles-29429655-bqxrg\" (UID: \"ecab19d4-0f98-436c-85ab-dd26b24a3374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429655-bqxrg" Dec 15 06:15:00 crc kubenswrapper[4747]: I1215 06:15:00.250525 4747 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecab19d4-0f98-436c-85ab-dd26b24a3374-config-volume\") pod \"collect-profiles-29429655-bqxrg\" (UID: \"ecab19d4-0f98-436c-85ab-dd26b24a3374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429655-bqxrg" Dec 15 06:15:00 crc kubenswrapper[4747]: I1215 06:15:00.352246 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecab19d4-0f98-436c-85ab-dd26b24a3374-secret-volume\") pod \"collect-profiles-29429655-bqxrg\" (UID: \"ecab19d4-0f98-436c-85ab-dd26b24a3374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429655-bqxrg" Dec 15 06:15:00 crc kubenswrapper[4747]: I1215 06:15:00.352310 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecab19d4-0f98-436c-85ab-dd26b24a3374-config-volume\") pod \"collect-profiles-29429655-bqxrg\" (UID: \"ecab19d4-0f98-436c-85ab-dd26b24a3374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429655-bqxrg" Dec 15 06:15:00 crc kubenswrapper[4747]: I1215 06:15:00.352443 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgv9b\" (UniqueName: \"kubernetes.io/projected/ecab19d4-0f98-436c-85ab-dd26b24a3374-kube-api-access-fgv9b\") pod \"collect-profiles-29429655-bqxrg\" (UID: \"ecab19d4-0f98-436c-85ab-dd26b24a3374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429655-bqxrg" Dec 15 06:15:00 crc kubenswrapper[4747]: I1215 06:15:00.353287 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecab19d4-0f98-436c-85ab-dd26b24a3374-config-volume\") pod \"collect-profiles-29429655-bqxrg\" (UID: \"ecab19d4-0f98-436c-85ab-dd26b24a3374\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29429655-bqxrg" Dec 15 06:15:00 crc kubenswrapper[4747]: I1215 06:15:00.358379 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecab19d4-0f98-436c-85ab-dd26b24a3374-secret-volume\") pod \"collect-profiles-29429655-bqxrg\" (UID: \"ecab19d4-0f98-436c-85ab-dd26b24a3374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429655-bqxrg" Dec 15 06:15:00 crc kubenswrapper[4747]: I1215 06:15:00.374099 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgv9b\" (UniqueName: \"kubernetes.io/projected/ecab19d4-0f98-436c-85ab-dd26b24a3374-kube-api-access-fgv9b\") pod \"collect-profiles-29429655-bqxrg\" (UID: \"ecab19d4-0f98-436c-85ab-dd26b24a3374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429655-bqxrg" Dec 15 06:15:00 crc kubenswrapper[4747]: I1215 06:15:00.465164 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29429655-bqxrg" Dec 15 06:15:00 crc kubenswrapper[4747]: I1215 06:15:00.887786 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29429655-bqxrg"] Dec 15 06:15:00 crc kubenswrapper[4747]: W1215 06:15:00.897526 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecab19d4_0f98_436c_85ab_dd26b24a3374.slice/crio-9616580b7186ca4786c78fbc8148216c79cae1a2058c39a0d8d1322f3740bb81 WatchSource:0}: Error finding container 9616580b7186ca4786c78fbc8148216c79cae1a2058c39a0d8d1322f3740bb81: Status 404 returned error can't find the container with id 9616580b7186ca4786c78fbc8148216c79cae1a2058c39a0d8d1322f3740bb81 Dec 15 06:15:01 crc kubenswrapper[4747]: I1215 06:15:01.169124 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29429655-bqxrg" event={"ID":"ecab19d4-0f98-436c-85ab-dd26b24a3374","Type":"ContainerStarted","Data":"90b1947dbf074fe8a35a7c6ea8478634be56554ae14f564bb187603f7d8d9422"} Dec 15 06:15:01 crc kubenswrapper[4747]: I1215 06:15:01.169186 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29429655-bqxrg" event={"ID":"ecab19d4-0f98-436c-85ab-dd26b24a3374","Type":"ContainerStarted","Data":"9616580b7186ca4786c78fbc8148216c79cae1a2058c39a0d8d1322f3740bb81"} Dec 15 06:15:01 crc kubenswrapper[4747]: I1215 06:15:01.193552 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29429655-bqxrg" podStartSLOduration=1.193532255 podStartE2EDuration="1.193532255s" podCreationTimestamp="2025-12-15 06:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 
06:15:01.184149093 +0000 UTC m=+2264.880661011" watchObservedRunningTime="2025-12-15 06:15:01.193532255 +0000 UTC m=+2264.890044173" Dec 15 06:15:02 crc kubenswrapper[4747]: I1215 06:15:02.194477 4747 generic.go:334] "Generic (PLEG): container finished" podID="ecab19d4-0f98-436c-85ab-dd26b24a3374" containerID="90b1947dbf074fe8a35a7c6ea8478634be56554ae14f564bb187603f7d8d9422" exitCode=0 Dec 15 06:15:02 crc kubenswrapper[4747]: I1215 06:15:02.195489 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29429655-bqxrg" event={"ID":"ecab19d4-0f98-436c-85ab-dd26b24a3374","Type":"ContainerDied","Data":"90b1947dbf074fe8a35a7c6ea8478634be56554ae14f564bb187603f7d8d9422"} Dec 15 06:15:03 crc kubenswrapper[4747]: I1215 06:15:03.492345 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29429655-bqxrg" Dec 15 06:15:03 crc kubenswrapper[4747]: I1215 06:15:03.622231 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecab19d4-0f98-436c-85ab-dd26b24a3374-secret-volume\") pod \"ecab19d4-0f98-436c-85ab-dd26b24a3374\" (UID: \"ecab19d4-0f98-436c-85ab-dd26b24a3374\") " Dec 15 06:15:03 crc kubenswrapper[4747]: I1215 06:15:03.623371 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgv9b\" (UniqueName: \"kubernetes.io/projected/ecab19d4-0f98-436c-85ab-dd26b24a3374-kube-api-access-fgv9b\") pod \"ecab19d4-0f98-436c-85ab-dd26b24a3374\" (UID: \"ecab19d4-0f98-436c-85ab-dd26b24a3374\") " Dec 15 06:15:03 crc kubenswrapper[4747]: I1215 06:15:03.623593 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecab19d4-0f98-436c-85ab-dd26b24a3374-config-volume\") pod \"ecab19d4-0f98-436c-85ab-dd26b24a3374\" (UID: 
\"ecab19d4-0f98-436c-85ab-dd26b24a3374\") " Dec 15 06:15:03 crc kubenswrapper[4747]: I1215 06:15:03.624521 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecab19d4-0f98-436c-85ab-dd26b24a3374-config-volume" (OuterVolumeSpecName: "config-volume") pod "ecab19d4-0f98-436c-85ab-dd26b24a3374" (UID: "ecab19d4-0f98-436c-85ab-dd26b24a3374"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 06:15:03 crc kubenswrapper[4747]: I1215 06:15:03.631527 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecab19d4-0f98-436c-85ab-dd26b24a3374-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ecab19d4-0f98-436c-85ab-dd26b24a3374" (UID: "ecab19d4-0f98-436c-85ab-dd26b24a3374"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:15:03 crc kubenswrapper[4747]: I1215 06:15:03.631776 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecab19d4-0f98-436c-85ab-dd26b24a3374-kube-api-access-fgv9b" (OuterVolumeSpecName: "kube-api-access-fgv9b") pod "ecab19d4-0f98-436c-85ab-dd26b24a3374" (UID: "ecab19d4-0f98-436c-85ab-dd26b24a3374"). InnerVolumeSpecName "kube-api-access-fgv9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:15:03 crc kubenswrapper[4747]: I1215 06:15:03.730193 4747 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecab19d4-0f98-436c-85ab-dd26b24a3374-config-volume\") on node \"crc\" DevicePath \"\"" Dec 15 06:15:03 crc kubenswrapper[4747]: I1215 06:15:03.730222 4747 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecab19d4-0f98-436c-85ab-dd26b24a3374-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 15 06:15:03 crc kubenswrapper[4747]: I1215 06:15:03.730258 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgv9b\" (UniqueName: \"kubernetes.io/projected/ecab19d4-0f98-436c-85ab-dd26b24a3374-kube-api-access-fgv9b\") on node \"crc\" DevicePath \"\"" Dec 15 06:15:04 crc kubenswrapper[4747]: I1215 06:15:04.227543 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29429655-bqxrg" event={"ID":"ecab19d4-0f98-436c-85ab-dd26b24a3374","Type":"ContainerDied","Data":"9616580b7186ca4786c78fbc8148216c79cae1a2058c39a0d8d1322f3740bb81"} Dec 15 06:15:04 crc kubenswrapper[4747]: I1215 06:15:04.227968 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9616580b7186ca4786c78fbc8148216c79cae1a2058c39a0d8d1322f3740bb81" Dec 15 06:15:04 crc kubenswrapper[4747]: I1215 06:15:04.227624 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29429655-bqxrg" Dec 15 06:15:04 crc kubenswrapper[4747]: I1215 06:15:04.266768 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8"] Dec 15 06:15:04 crc kubenswrapper[4747]: I1215 06:15:04.272616 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29429610-bgnz8"] Dec 15 06:15:04 crc kubenswrapper[4747]: I1215 06:15:04.641328 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62" path="/var/lib/kubelet/pods/ebb703f6-cf81-4ee4-9ec7-d58fa71e7f62/volumes" Dec 15 06:15:33 crc kubenswrapper[4747]: I1215 06:15:33.567696 4747 scope.go:117] "RemoveContainer" containerID="39ca4119249466f267e77a7b4e3a28afc05d59998b059bf159bd96dbaeb55362" Dec 15 06:16:28 crc kubenswrapper[4747]: I1215 06:16:28.864825 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:16:28 crc kubenswrapper[4747]: I1215 06:16:28.866607 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:16:58 crc kubenswrapper[4747]: I1215 06:16:58.865553 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Dec 15 06:16:58 crc kubenswrapper[4747]: I1215 06:16:58.866223 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:17:23 crc kubenswrapper[4747]: I1215 06:17:23.425980 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b9wbh"] Dec 15 06:17:23 crc kubenswrapper[4747]: E1215 06:17:23.426898 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecab19d4-0f98-436c-85ab-dd26b24a3374" containerName="collect-profiles" Dec 15 06:17:23 crc kubenswrapper[4747]: I1215 06:17:23.426911 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecab19d4-0f98-436c-85ab-dd26b24a3374" containerName="collect-profiles" Dec 15 06:17:23 crc kubenswrapper[4747]: I1215 06:17:23.428468 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecab19d4-0f98-436c-85ab-dd26b24a3374" containerName="collect-profiles" Dec 15 06:17:23 crc kubenswrapper[4747]: I1215 06:17:23.429716 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b9wbh" Dec 15 06:17:23 crc kubenswrapper[4747]: I1215 06:17:23.446309 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b9wbh"] Dec 15 06:17:23 crc kubenswrapper[4747]: I1215 06:17:23.512710 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/081aa26a-2f36-44aa-8946-66205ad5c16d-utilities\") pod \"certified-operators-b9wbh\" (UID: \"081aa26a-2f36-44aa-8946-66205ad5c16d\") " pod="openshift-marketplace/certified-operators-b9wbh" Dec 15 06:17:23 crc kubenswrapper[4747]: I1215 06:17:23.513092 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mskz\" (UniqueName: \"kubernetes.io/projected/081aa26a-2f36-44aa-8946-66205ad5c16d-kube-api-access-6mskz\") pod \"certified-operators-b9wbh\" (UID: \"081aa26a-2f36-44aa-8946-66205ad5c16d\") " pod="openshift-marketplace/certified-operators-b9wbh" Dec 15 06:17:23 crc kubenswrapper[4747]: I1215 06:17:23.513308 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/081aa26a-2f36-44aa-8946-66205ad5c16d-catalog-content\") pod \"certified-operators-b9wbh\" (UID: \"081aa26a-2f36-44aa-8946-66205ad5c16d\") " pod="openshift-marketplace/certified-operators-b9wbh" Dec 15 06:17:23 crc kubenswrapper[4747]: I1215 06:17:23.615569 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mskz\" (UniqueName: \"kubernetes.io/projected/081aa26a-2f36-44aa-8946-66205ad5c16d-kube-api-access-6mskz\") pod \"certified-operators-b9wbh\" (UID: \"081aa26a-2f36-44aa-8946-66205ad5c16d\") " pod="openshift-marketplace/certified-operators-b9wbh" Dec 15 06:17:23 crc kubenswrapper[4747]: I1215 06:17:23.615892 4747 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/081aa26a-2f36-44aa-8946-66205ad5c16d-catalog-content\") pod \"certified-operators-b9wbh\" (UID: \"081aa26a-2f36-44aa-8946-66205ad5c16d\") " pod="openshift-marketplace/certified-operators-b9wbh" Dec 15 06:17:23 crc kubenswrapper[4747]: I1215 06:17:23.616084 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/081aa26a-2f36-44aa-8946-66205ad5c16d-utilities\") pod \"certified-operators-b9wbh\" (UID: \"081aa26a-2f36-44aa-8946-66205ad5c16d\") " pod="openshift-marketplace/certified-operators-b9wbh" Dec 15 06:17:23 crc kubenswrapper[4747]: I1215 06:17:23.616494 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/081aa26a-2f36-44aa-8946-66205ad5c16d-catalog-content\") pod \"certified-operators-b9wbh\" (UID: \"081aa26a-2f36-44aa-8946-66205ad5c16d\") " pod="openshift-marketplace/certified-operators-b9wbh" Dec 15 06:17:23 crc kubenswrapper[4747]: I1215 06:17:23.616517 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/081aa26a-2f36-44aa-8946-66205ad5c16d-utilities\") pod \"certified-operators-b9wbh\" (UID: \"081aa26a-2f36-44aa-8946-66205ad5c16d\") " pod="openshift-marketplace/certified-operators-b9wbh" Dec 15 06:17:23 crc kubenswrapper[4747]: I1215 06:17:23.635806 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mskz\" (UniqueName: \"kubernetes.io/projected/081aa26a-2f36-44aa-8946-66205ad5c16d-kube-api-access-6mskz\") pod \"certified-operators-b9wbh\" (UID: \"081aa26a-2f36-44aa-8946-66205ad5c16d\") " pod="openshift-marketplace/certified-operators-b9wbh" Dec 15 06:17:23 crc kubenswrapper[4747]: I1215 06:17:23.748502 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b9wbh" Dec 15 06:17:24 crc kubenswrapper[4747]: I1215 06:17:24.186577 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b9wbh"] Dec 15 06:17:24 crc kubenswrapper[4747]: I1215 06:17:24.486298 4747 generic.go:334] "Generic (PLEG): container finished" podID="081aa26a-2f36-44aa-8946-66205ad5c16d" containerID="ae9c57ad740e20e416ea90070efbe84c9dc577ac0663cc7faa86ba5ec31dbe37" exitCode=0 Dec 15 06:17:24 crc kubenswrapper[4747]: I1215 06:17:24.486399 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9wbh" event={"ID":"081aa26a-2f36-44aa-8946-66205ad5c16d","Type":"ContainerDied","Data":"ae9c57ad740e20e416ea90070efbe84c9dc577ac0663cc7faa86ba5ec31dbe37"} Dec 15 06:17:24 crc kubenswrapper[4747]: I1215 06:17:24.486697 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9wbh" event={"ID":"081aa26a-2f36-44aa-8946-66205ad5c16d","Type":"ContainerStarted","Data":"771e7282adf97db935dc76783abf978dc6b54f099b4b9650a050f12182bc6f47"} Dec 15 06:17:25 crc kubenswrapper[4747]: I1215 06:17:25.495148 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9wbh" event={"ID":"081aa26a-2f36-44aa-8946-66205ad5c16d","Type":"ContainerStarted","Data":"bd83d9fb7df3dda617eb07ad95a8bdbcd3947885bd432923edd263579a7ca6dc"} Dec 15 06:17:26 crc kubenswrapper[4747]: I1215 06:17:26.508145 4747 generic.go:334] "Generic (PLEG): container finished" podID="081aa26a-2f36-44aa-8946-66205ad5c16d" containerID="bd83d9fb7df3dda617eb07ad95a8bdbcd3947885bd432923edd263579a7ca6dc" exitCode=0 Dec 15 06:17:26 crc kubenswrapper[4747]: I1215 06:17:26.508220 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9wbh" 
event={"ID":"081aa26a-2f36-44aa-8946-66205ad5c16d","Type":"ContainerDied","Data":"bd83d9fb7df3dda617eb07ad95a8bdbcd3947885bd432923edd263579a7ca6dc"} Dec 15 06:17:27 crc kubenswrapper[4747]: I1215 06:17:27.520683 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9wbh" event={"ID":"081aa26a-2f36-44aa-8946-66205ad5c16d","Type":"ContainerStarted","Data":"5341543c7db5fc4d4c552e5ba204b72ae536ae6088e74c14201b4f79d6c1c856"} Dec 15 06:17:27 crc kubenswrapper[4747]: I1215 06:17:27.538737 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b9wbh" podStartSLOduration=1.960709446 podStartE2EDuration="4.538713034s" podCreationTimestamp="2025-12-15 06:17:23 +0000 UTC" firstStartedPulling="2025-12-15 06:17:24.488757704 +0000 UTC m=+2408.185269620" lastFinishedPulling="2025-12-15 06:17:27.066761292 +0000 UTC m=+2410.763273208" observedRunningTime="2025-12-15 06:17:27.538042583 +0000 UTC m=+2411.234554501" watchObservedRunningTime="2025-12-15 06:17:27.538713034 +0000 UTC m=+2411.235224950" Dec 15 06:17:28 crc kubenswrapper[4747]: I1215 06:17:28.865408 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:17:28 crc kubenswrapper[4747]: I1215 06:17:28.865806 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:17:28 crc kubenswrapper[4747]: I1215 06:17:28.865866 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 06:17:28 crc kubenswrapper[4747]: I1215 06:17:28.866449 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16"} pod="openshift-machine-config-operator/machine-config-daemon-nldtn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 15 06:17:28 crc kubenswrapper[4747]: I1215 06:17:28.866519 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" containerID="cri-o://9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" gracePeriod=600 Dec 15 06:17:29 crc kubenswrapper[4747]: E1215 06:17:29.007327 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:17:29 crc kubenswrapper[4747]: I1215 06:17:29.539590 4747 generic.go:334] "Generic (PLEG): container finished" podID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" exitCode=0 Dec 15 06:17:29 crc kubenswrapper[4747]: I1215 06:17:29.539688 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerDied","Data":"9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16"} Dec 15 06:17:29 crc 
kubenswrapper[4747]: I1215 06:17:29.540065 4747 scope.go:117] "RemoveContainer" containerID="792cda812eb2119b7ff0b41b927687bb15e0e2fd42f24ffa26a56782c6542e51" Dec 15 06:17:29 crc kubenswrapper[4747]: I1215 06:17:29.540997 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:17:29 crc kubenswrapper[4747]: E1215 06:17:29.541339 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:17:33 crc kubenswrapper[4747]: I1215 06:17:33.749600 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b9wbh" Dec 15 06:17:33 crc kubenswrapper[4747]: I1215 06:17:33.751337 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b9wbh" Dec 15 06:17:33 crc kubenswrapper[4747]: I1215 06:17:33.790603 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b9wbh" Dec 15 06:17:34 crc kubenswrapper[4747]: I1215 06:17:34.641691 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b9wbh" Dec 15 06:17:34 crc kubenswrapper[4747]: I1215 06:17:34.693015 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b9wbh"] Dec 15 06:17:36 crc kubenswrapper[4747]: I1215 06:17:36.620688 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b9wbh" 
podUID="081aa26a-2f36-44aa-8946-66205ad5c16d" containerName="registry-server" containerID="cri-o://5341543c7db5fc4d4c552e5ba204b72ae536ae6088e74c14201b4f79d6c1c856" gracePeriod=2 Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.053058 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b9wbh" Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.141109 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/081aa26a-2f36-44aa-8946-66205ad5c16d-catalog-content\") pod \"081aa26a-2f36-44aa-8946-66205ad5c16d\" (UID: \"081aa26a-2f36-44aa-8946-66205ad5c16d\") " Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.141170 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mskz\" (UniqueName: \"kubernetes.io/projected/081aa26a-2f36-44aa-8946-66205ad5c16d-kube-api-access-6mskz\") pod \"081aa26a-2f36-44aa-8946-66205ad5c16d\" (UID: \"081aa26a-2f36-44aa-8946-66205ad5c16d\") " Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.141200 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/081aa26a-2f36-44aa-8946-66205ad5c16d-utilities\") pod \"081aa26a-2f36-44aa-8946-66205ad5c16d\" (UID: \"081aa26a-2f36-44aa-8946-66205ad5c16d\") " Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.142158 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/081aa26a-2f36-44aa-8946-66205ad5c16d-utilities" (OuterVolumeSpecName: "utilities") pod "081aa26a-2f36-44aa-8946-66205ad5c16d" (UID: "081aa26a-2f36-44aa-8946-66205ad5c16d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.146284 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/081aa26a-2f36-44aa-8946-66205ad5c16d-kube-api-access-6mskz" (OuterVolumeSpecName: "kube-api-access-6mskz") pod "081aa26a-2f36-44aa-8946-66205ad5c16d" (UID: "081aa26a-2f36-44aa-8946-66205ad5c16d"). InnerVolumeSpecName "kube-api-access-6mskz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.174590 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/081aa26a-2f36-44aa-8946-66205ad5c16d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "081aa26a-2f36-44aa-8946-66205ad5c16d" (UID: "081aa26a-2f36-44aa-8946-66205ad5c16d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.243356 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/081aa26a-2f36-44aa-8946-66205ad5c16d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.243387 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mskz\" (UniqueName: \"kubernetes.io/projected/081aa26a-2f36-44aa-8946-66205ad5c16d-kube-api-access-6mskz\") on node \"crc\" DevicePath \"\"" Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.243400 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/081aa26a-2f36-44aa-8946-66205ad5c16d-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.633554 4747 generic.go:334] "Generic (PLEG): container finished" podID="081aa26a-2f36-44aa-8946-66205ad5c16d" 
containerID="5341543c7db5fc4d4c552e5ba204b72ae536ae6088e74c14201b4f79d6c1c856" exitCode=0 Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.633856 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9wbh" event={"ID":"081aa26a-2f36-44aa-8946-66205ad5c16d","Type":"ContainerDied","Data":"5341543c7db5fc4d4c552e5ba204b72ae536ae6088e74c14201b4f79d6c1c856"} Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.633890 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b9wbh" event={"ID":"081aa26a-2f36-44aa-8946-66205ad5c16d","Type":"ContainerDied","Data":"771e7282adf97db935dc76783abf978dc6b54f099b4b9650a050f12182bc6f47"} Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.633911 4747 scope.go:117] "RemoveContainer" containerID="5341543c7db5fc4d4c552e5ba204b72ae536ae6088e74c14201b4f79d6c1c856" Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.634103 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b9wbh" Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.651066 4747 scope.go:117] "RemoveContainer" containerID="bd83d9fb7df3dda617eb07ad95a8bdbcd3947885bd432923edd263579a7ca6dc" Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.663944 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b9wbh"] Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.670822 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b9wbh"] Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.676988 4747 scope.go:117] "RemoveContainer" containerID="ae9c57ad740e20e416ea90070efbe84c9dc577ac0663cc7faa86ba5ec31dbe37" Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.702207 4747 scope.go:117] "RemoveContainer" containerID="5341543c7db5fc4d4c552e5ba204b72ae536ae6088e74c14201b4f79d6c1c856" Dec 15 06:17:37 crc kubenswrapper[4747]: E1215 06:17:37.702598 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5341543c7db5fc4d4c552e5ba204b72ae536ae6088e74c14201b4f79d6c1c856\": container with ID starting with 5341543c7db5fc4d4c552e5ba204b72ae536ae6088e74c14201b4f79d6c1c856 not found: ID does not exist" containerID="5341543c7db5fc4d4c552e5ba204b72ae536ae6088e74c14201b4f79d6c1c856" Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.702679 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5341543c7db5fc4d4c552e5ba204b72ae536ae6088e74c14201b4f79d6c1c856"} err="failed to get container status \"5341543c7db5fc4d4c552e5ba204b72ae536ae6088e74c14201b4f79d6c1c856\": rpc error: code = NotFound desc = could not find container \"5341543c7db5fc4d4c552e5ba204b72ae536ae6088e74c14201b4f79d6c1c856\": container with ID starting with 5341543c7db5fc4d4c552e5ba204b72ae536ae6088e74c14201b4f79d6c1c856 not 
found: ID does not exist" Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.702755 4747 scope.go:117] "RemoveContainer" containerID="bd83d9fb7df3dda617eb07ad95a8bdbcd3947885bd432923edd263579a7ca6dc" Dec 15 06:17:37 crc kubenswrapper[4747]: E1215 06:17:37.703175 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd83d9fb7df3dda617eb07ad95a8bdbcd3947885bd432923edd263579a7ca6dc\": container with ID starting with bd83d9fb7df3dda617eb07ad95a8bdbcd3947885bd432923edd263579a7ca6dc not found: ID does not exist" containerID="bd83d9fb7df3dda617eb07ad95a8bdbcd3947885bd432923edd263579a7ca6dc" Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.703197 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd83d9fb7df3dda617eb07ad95a8bdbcd3947885bd432923edd263579a7ca6dc"} err="failed to get container status \"bd83d9fb7df3dda617eb07ad95a8bdbcd3947885bd432923edd263579a7ca6dc\": rpc error: code = NotFound desc = could not find container \"bd83d9fb7df3dda617eb07ad95a8bdbcd3947885bd432923edd263579a7ca6dc\": container with ID starting with bd83d9fb7df3dda617eb07ad95a8bdbcd3947885bd432923edd263579a7ca6dc not found: ID does not exist" Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.703212 4747 scope.go:117] "RemoveContainer" containerID="ae9c57ad740e20e416ea90070efbe84c9dc577ac0663cc7faa86ba5ec31dbe37" Dec 15 06:17:37 crc kubenswrapper[4747]: E1215 06:17:37.703628 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae9c57ad740e20e416ea90070efbe84c9dc577ac0663cc7faa86ba5ec31dbe37\": container with ID starting with ae9c57ad740e20e416ea90070efbe84c9dc577ac0663cc7faa86ba5ec31dbe37 not found: ID does not exist" containerID="ae9c57ad740e20e416ea90070efbe84c9dc577ac0663cc7faa86ba5ec31dbe37" Dec 15 06:17:37 crc kubenswrapper[4747]: I1215 06:17:37.703673 4747 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae9c57ad740e20e416ea90070efbe84c9dc577ac0663cc7faa86ba5ec31dbe37"} err="failed to get container status \"ae9c57ad740e20e416ea90070efbe84c9dc577ac0663cc7faa86ba5ec31dbe37\": rpc error: code = NotFound desc = could not find container \"ae9c57ad740e20e416ea90070efbe84c9dc577ac0663cc7faa86ba5ec31dbe37\": container with ID starting with ae9c57ad740e20e416ea90070efbe84c9dc577ac0663cc7faa86ba5ec31dbe37 not found: ID does not exist" Dec 15 06:17:38 crc kubenswrapper[4747]: I1215 06:17:38.641058 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="081aa26a-2f36-44aa-8946-66205ad5c16d" path="/var/lib/kubelet/pods/081aa26a-2f36-44aa-8946-66205ad5c16d/volumes" Dec 15 06:17:40 crc kubenswrapper[4747]: I1215 06:17:40.629649 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:17:40 crc kubenswrapper[4747]: E1215 06:17:40.630247 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:17:55 crc kubenswrapper[4747]: I1215 06:17:55.630406 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:17:55 crc kubenswrapper[4747]: E1215 06:17:55.631569 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:18:09 crc kubenswrapper[4747]: I1215 06:18:09.630007 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:18:09 crc kubenswrapper[4747]: E1215 06:18:09.630896 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:18:24 crc kubenswrapper[4747]: I1215 06:18:24.629674 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:18:24 crc kubenswrapper[4747]: E1215 06:18:24.630751 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:18:26 crc kubenswrapper[4747]: I1215 06:18:26.684683 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hc2pq"] Dec 15 06:18:26 crc kubenswrapper[4747]: E1215 06:18:26.685437 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="081aa26a-2f36-44aa-8946-66205ad5c16d" containerName="registry-server" Dec 15 06:18:26 crc kubenswrapper[4747]: I1215 06:18:26.685453 
4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="081aa26a-2f36-44aa-8946-66205ad5c16d" containerName="registry-server" Dec 15 06:18:26 crc kubenswrapper[4747]: E1215 06:18:26.685501 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="081aa26a-2f36-44aa-8946-66205ad5c16d" containerName="extract-content" Dec 15 06:18:26 crc kubenswrapper[4747]: I1215 06:18:26.685507 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="081aa26a-2f36-44aa-8946-66205ad5c16d" containerName="extract-content" Dec 15 06:18:26 crc kubenswrapper[4747]: E1215 06:18:26.685536 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="081aa26a-2f36-44aa-8946-66205ad5c16d" containerName="extract-utilities" Dec 15 06:18:26 crc kubenswrapper[4747]: I1215 06:18:26.685544 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="081aa26a-2f36-44aa-8946-66205ad5c16d" containerName="extract-utilities" Dec 15 06:18:26 crc kubenswrapper[4747]: I1215 06:18:26.685737 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="081aa26a-2f36-44aa-8946-66205ad5c16d" containerName="registry-server" Dec 15 06:18:26 crc kubenswrapper[4747]: I1215 06:18:26.687060 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hc2pq" Dec 15 06:18:26 crc kubenswrapper[4747]: I1215 06:18:26.709686 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hc2pq"] Dec 15 06:18:26 crc kubenswrapper[4747]: I1215 06:18:26.769891 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cfb7053-fbb3-4e63-b51b-1d17353d1b6d-utilities\") pod \"redhat-operators-hc2pq\" (UID: \"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d\") " pod="openshift-marketplace/redhat-operators-hc2pq" Dec 15 06:18:26 crc kubenswrapper[4747]: I1215 06:18:26.770029 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtj24\" (UniqueName: \"kubernetes.io/projected/1cfb7053-fbb3-4e63-b51b-1d17353d1b6d-kube-api-access-jtj24\") pod \"redhat-operators-hc2pq\" (UID: \"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d\") " pod="openshift-marketplace/redhat-operators-hc2pq" Dec 15 06:18:26 crc kubenswrapper[4747]: I1215 06:18:26.770292 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cfb7053-fbb3-4e63-b51b-1d17353d1b6d-catalog-content\") pod \"redhat-operators-hc2pq\" (UID: \"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d\") " pod="openshift-marketplace/redhat-operators-hc2pq" Dec 15 06:18:26 crc kubenswrapper[4747]: I1215 06:18:26.872276 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cfb7053-fbb3-4e63-b51b-1d17353d1b6d-catalog-content\") pod \"redhat-operators-hc2pq\" (UID: \"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d\") " pod="openshift-marketplace/redhat-operators-hc2pq" Dec 15 06:18:26 crc kubenswrapper[4747]: I1215 06:18:26.872360 4747 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cfb7053-fbb3-4e63-b51b-1d17353d1b6d-utilities\") pod \"redhat-operators-hc2pq\" (UID: \"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d\") " pod="openshift-marketplace/redhat-operators-hc2pq" Dec 15 06:18:26 crc kubenswrapper[4747]: I1215 06:18:26.872392 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtj24\" (UniqueName: \"kubernetes.io/projected/1cfb7053-fbb3-4e63-b51b-1d17353d1b6d-kube-api-access-jtj24\") pod \"redhat-operators-hc2pq\" (UID: \"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d\") " pod="openshift-marketplace/redhat-operators-hc2pq" Dec 15 06:18:26 crc kubenswrapper[4747]: I1215 06:18:26.873081 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cfb7053-fbb3-4e63-b51b-1d17353d1b6d-catalog-content\") pod \"redhat-operators-hc2pq\" (UID: \"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d\") " pod="openshift-marketplace/redhat-operators-hc2pq" Dec 15 06:18:26 crc kubenswrapper[4747]: I1215 06:18:26.873183 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cfb7053-fbb3-4e63-b51b-1d17353d1b6d-utilities\") pod \"redhat-operators-hc2pq\" (UID: \"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d\") " pod="openshift-marketplace/redhat-operators-hc2pq" Dec 15 06:18:26 crc kubenswrapper[4747]: I1215 06:18:26.889213 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtj24\" (UniqueName: \"kubernetes.io/projected/1cfb7053-fbb3-4e63-b51b-1d17353d1b6d-kube-api-access-jtj24\") pod \"redhat-operators-hc2pq\" (UID: \"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d\") " pod="openshift-marketplace/redhat-operators-hc2pq" Dec 15 06:18:27 crc kubenswrapper[4747]: I1215 06:18:27.009247 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hc2pq" Dec 15 06:18:27 crc kubenswrapper[4747]: I1215 06:18:27.465104 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hc2pq"] Dec 15 06:18:28 crc kubenswrapper[4747]: I1215 06:18:28.098187 4747 generic.go:334] "Generic (PLEG): container finished" podID="1cfb7053-fbb3-4e63-b51b-1d17353d1b6d" containerID="4d6b7d9c9c2d1ee66595800b5cb139e8add832b18c26572836a52d243b6447f7" exitCode=0 Dec 15 06:18:28 crc kubenswrapper[4747]: I1215 06:18:28.098322 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hc2pq" event={"ID":"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d","Type":"ContainerDied","Data":"4d6b7d9c9c2d1ee66595800b5cb139e8add832b18c26572836a52d243b6447f7"} Dec 15 06:18:28 crc kubenswrapper[4747]: I1215 06:18:28.099508 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hc2pq" event={"ID":"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d","Type":"ContainerStarted","Data":"a07c0c4d042df323d64add3314eb2d32be1f150f32e3ea7cb4f38e8cc13f6496"} Dec 15 06:18:29 crc kubenswrapper[4747]: I1215 06:18:29.111918 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hc2pq" event={"ID":"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d","Type":"ContainerStarted","Data":"69d8113186c3202b3405bec2bd7f53f3b393713c786f8a164b159ca0c7fe7756"} Dec 15 06:18:31 crc kubenswrapper[4747]: I1215 06:18:31.130633 4747 generic.go:334] "Generic (PLEG): container finished" podID="1cfb7053-fbb3-4e63-b51b-1d17353d1b6d" containerID="69d8113186c3202b3405bec2bd7f53f3b393713c786f8a164b159ca0c7fe7756" exitCode=0 Dec 15 06:18:31 crc kubenswrapper[4747]: I1215 06:18:31.130708 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hc2pq" 
event={"ID":"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d","Type":"ContainerDied","Data":"69d8113186c3202b3405bec2bd7f53f3b393713c786f8a164b159ca0c7fe7756"} Dec 15 06:18:32 crc kubenswrapper[4747]: I1215 06:18:32.144426 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hc2pq" event={"ID":"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d","Type":"ContainerStarted","Data":"de3f0cfc68b883df4bf92c3b6d6922eea96efa00d51f3658be79cbd4378c3c5f"} Dec 15 06:18:32 crc kubenswrapper[4747]: I1215 06:18:32.170148 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hc2pq" podStartSLOduration=2.634247318 podStartE2EDuration="6.170123054s" podCreationTimestamp="2025-12-15 06:18:26 +0000 UTC" firstStartedPulling="2025-12-15 06:18:28.101412392 +0000 UTC m=+2471.797924309" lastFinishedPulling="2025-12-15 06:18:31.637288127 +0000 UTC m=+2475.333800045" observedRunningTime="2025-12-15 06:18:32.16197721 +0000 UTC m=+2475.858489127" watchObservedRunningTime="2025-12-15 06:18:32.170123054 +0000 UTC m=+2475.866634971" Dec 15 06:18:37 crc kubenswrapper[4747]: I1215 06:18:37.009558 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hc2pq" Dec 15 06:18:37 crc kubenswrapper[4747]: I1215 06:18:37.010392 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hc2pq" Dec 15 06:18:37 crc kubenswrapper[4747]: I1215 06:18:37.054600 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hc2pq" Dec 15 06:18:37 crc kubenswrapper[4747]: I1215 06:18:37.233118 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hc2pq" Dec 15 06:18:37 crc kubenswrapper[4747]: I1215 06:18:37.300427 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-hc2pq"] Dec 15 06:18:38 crc kubenswrapper[4747]: I1215 06:18:38.630377 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:18:38 crc kubenswrapper[4747]: E1215 06:18:38.630710 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:18:39 crc kubenswrapper[4747]: I1215 06:18:39.223102 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hc2pq" podUID="1cfb7053-fbb3-4e63-b51b-1d17353d1b6d" containerName="registry-server" containerID="cri-o://de3f0cfc68b883df4bf92c3b6d6922eea96efa00d51f3658be79cbd4378c3c5f" gracePeriod=2 Dec 15 06:18:39 crc kubenswrapper[4747]: I1215 06:18:39.622374 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hc2pq" Dec 15 06:18:39 crc kubenswrapper[4747]: I1215 06:18:39.769548 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cfb7053-fbb3-4e63-b51b-1d17353d1b6d-catalog-content\") pod \"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d\" (UID: \"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d\") " Dec 15 06:18:39 crc kubenswrapper[4747]: I1215 06:18:39.770000 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtj24\" (UniqueName: \"kubernetes.io/projected/1cfb7053-fbb3-4e63-b51b-1d17353d1b6d-kube-api-access-jtj24\") pod \"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d\" (UID: \"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d\") " Dec 15 06:18:39 crc kubenswrapper[4747]: I1215 06:18:39.770154 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cfb7053-fbb3-4e63-b51b-1d17353d1b6d-utilities\") pod \"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d\" (UID: \"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d\") " Dec 15 06:18:39 crc kubenswrapper[4747]: I1215 06:18:39.771063 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cfb7053-fbb3-4e63-b51b-1d17353d1b6d-utilities" (OuterVolumeSpecName: "utilities") pod "1cfb7053-fbb3-4e63-b51b-1d17353d1b6d" (UID: "1cfb7053-fbb3-4e63-b51b-1d17353d1b6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:18:39 crc kubenswrapper[4747]: I1215 06:18:39.778349 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cfb7053-fbb3-4e63-b51b-1d17353d1b6d-kube-api-access-jtj24" (OuterVolumeSpecName: "kube-api-access-jtj24") pod "1cfb7053-fbb3-4e63-b51b-1d17353d1b6d" (UID: "1cfb7053-fbb3-4e63-b51b-1d17353d1b6d"). InnerVolumeSpecName "kube-api-access-jtj24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:18:39 crc kubenswrapper[4747]: I1215 06:18:39.860366 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cfb7053-fbb3-4e63-b51b-1d17353d1b6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1cfb7053-fbb3-4e63-b51b-1d17353d1b6d" (UID: "1cfb7053-fbb3-4e63-b51b-1d17353d1b6d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:18:39 crc kubenswrapper[4747]: I1215 06:18:39.873780 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtj24\" (UniqueName: \"kubernetes.io/projected/1cfb7053-fbb3-4e63-b51b-1d17353d1b6d-kube-api-access-jtj24\") on node \"crc\" DevicePath \"\"" Dec 15 06:18:39 crc kubenswrapper[4747]: I1215 06:18:39.873820 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cfb7053-fbb3-4e63-b51b-1d17353d1b6d-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 06:18:39 crc kubenswrapper[4747]: I1215 06:18:39.873833 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cfb7053-fbb3-4e63-b51b-1d17353d1b6d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 06:18:40 crc kubenswrapper[4747]: I1215 06:18:40.237124 4747 generic.go:334] "Generic (PLEG): container finished" podID="1cfb7053-fbb3-4e63-b51b-1d17353d1b6d" containerID="de3f0cfc68b883df4bf92c3b6d6922eea96efa00d51f3658be79cbd4378c3c5f" exitCode=0 Dec 15 06:18:40 crc kubenswrapper[4747]: I1215 06:18:40.237179 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hc2pq" event={"ID":"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d","Type":"ContainerDied","Data":"de3f0cfc68b883df4bf92c3b6d6922eea96efa00d51f3658be79cbd4378c3c5f"} Dec 15 06:18:40 crc kubenswrapper[4747]: I1215 06:18:40.237220 4747 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hc2pq" Dec 15 06:18:40 crc kubenswrapper[4747]: I1215 06:18:40.237248 4747 scope.go:117] "RemoveContainer" containerID="de3f0cfc68b883df4bf92c3b6d6922eea96efa00d51f3658be79cbd4378c3c5f" Dec 15 06:18:40 crc kubenswrapper[4747]: I1215 06:18:40.237226 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hc2pq" event={"ID":"1cfb7053-fbb3-4e63-b51b-1d17353d1b6d","Type":"ContainerDied","Data":"a07c0c4d042df323d64add3314eb2d32be1f150f32e3ea7cb4f38e8cc13f6496"} Dec 15 06:18:40 crc kubenswrapper[4747]: I1215 06:18:40.264577 4747 scope.go:117] "RemoveContainer" containerID="69d8113186c3202b3405bec2bd7f53f3b393713c786f8a164b159ca0c7fe7756" Dec 15 06:18:40 crc kubenswrapper[4747]: I1215 06:18:40.292629 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hc2pq"] Dec 15 06:18:40 crc kubenswrapper[4747]: I1215 06:18:40.299849 4747 scope.go:117] "RemoveContainer" containerID="4d6b7d9c9c2d1ee66595800b5cb139e8add832b18c26572836a52d243b6447f7" Dec 15 06:18:40 crc kubenswrapper[4747]: I1215 06:18:40.306694 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hc2pq"] Dec 15 06:18:40 crc kubenswrapper[4747]: I1215 06:18:40.327783 4747 scope.go:117] "RemoveContainer" containerID="de3f0cfc68b883df4bf92c3b6d6922eea96efa00d51f3658be79cbd4378c3c5f" Dec 15 06:18:40 crc kubenswrapper[4747]: E1215 06:18:40.328167 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de3f0cfc68b883df4bf92c3b6d6922eea96efa00d51f3658be79cbd4378c3c5f\": container with ID starting with de3f0cfc68b883df4bf92c3b6d6922eea96efa00d51f3658be79cbd4378c3c5f not found: ID does not exist" containerID="de3f0cfc68b883df4bf92c3b6d6922eea96efa00d51f3658be79cbd4378c3c5f" Dec 15 06:18:40 crc kubenswrapper[4747]: I1215 06:18:40.328200 4747 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de3f0cfc68b883df4bf92c3b6d6922eea96efa00d51f3658be79cbd4378c3c5f"} err="failed to get container status \"de3f0cfc68b883df4bf92c3b6d6922eea96efa00d51f3658be79cbd4378c3c5f\": rpc error: code = NotFound desc = could not find container \"de3f0cfc68b883df4bf92c3b6d6922eea96efa00d51f3658be79cbd4378c3c5f\": container with ID starting with de3f0cfc68b883df4bf92c3b6d6922eea96efa00d51f3658be79cbd4378c3c5f not found: ID does not exist" Dec 15 06:18:40 crc kubenswrapper[4747]: I1215 06:18:40.328221 4747 scope.go:117] "RemoveContainer" containerID="69d8113186c3202b3405bec2bd7f53f3b393713c786f8a164b159ca0c7fe7756" Dec 15 06:18:40 crc kubenswrapper[4747]: E1215 06:18:40.328504 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69d8113186c3202b3405bec2bd7f53f3b393713c786f8a164b159ca0c7fe7756\": container with ID starting with 69d8113186c3202b3405bec2bd7f53f3b393713c786f8a164b159ca0c7fe7756 not found: ID does not exist" containerID="69d8113186c3202b3405bec2bd7f53f3b393713c786f8a164b159ca0c7fe7756" Dec 15 06:18:40 crc kubenswrapper[4747]: I1215 06:18:40.328523 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69d8113186c3202b3405bec2bd7f53f3b393713c786f8a164b159ca0c7fe7756"} err="failed to get container status \"69d8113186c3202b3405bec2bd7f53f3b393713c786f8a164b159ca0c7fe7756\": rpc error: code = NotFound desc = could not find container \"69d8113186c3202b3405bec2bd7f53f3b393713c786f8a164b159ca0c7fe7756\": container with ID starting with 69d8113186c3202b3405bec2bd7f53f3b393713c786f8a164b159ca0c7fe7756 not found: ID does not exist" Dec 15 06:18:40 crc kubenswrapper[4747]: I1215 06:18:40.328536 4747 scope.go:117] "RemoveContainer" containerID="4d6b7d9c9c2d1ee66595800b5cb139e8add832b18c26572836a52d243b6447f7" Dec 15 06:18:40 crc kubenswrapper[4747]: E1215 
06:18:40.328855 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d6b7d9c9c2d1ee66595800b5cb139e8add832b18c26572836a52d243b6447f7\": container with ID starting with 4d6b7d9c9c2d1ee66595800b5cb139e8add832b18c26572836a52d243b6447f7 not found: ID does not exist" containerID="4d6b7d9c9c2d1ee66595800b5cb139e8add832b18c26572836a52d243b6447f7" Dec 15 06:18:40 crc kubenswrapper[4747]: I1215 06:18:40.328871 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d6b7d9c9c2d1ee66595800b5cb139e8add832b18c26572836a52d243b6447f7"} err="failed to get container status \"4d6b7d9c9c2d1ee66595800b5cb139e8add832b18c26572836a52d243b6447f7\": rpc error: code = NotFound desc = could not find container \"4d6b7d9c9c2d1ee66595800b5cb139e8add832b18c26572836a52d243b6447f7\": container with ID starting with 4d6b7d9c9c2d1ee66595800b5cb139e8add832b18c26572836a52d243b6447f7 not found: ID does not exist" Dec 15 06:18:40 crc kubenswrapper[4747]: I1215 06:18:40.661804 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cfb7053-fbb3-4e63-b51b-1d17353d1b6d" path="/var/lib/kubelet/pods/1cfb7053-fbb3-4e63-b51b-1d17353d1b6d/volumes" Dec 15 06:18:49 crc kubenswrapper[4747]: I1215 06:18:49.629622 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:18:49 crc kubenswrapper[4747]: E1215 06:18:49.630513 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:19:03 crc kubenswrapper[4747]: I1215 06:19:03.629204 
4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:19:03 crc kubenswrapper[4747]: E1215 06:19:03.629965 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:19:15 crc kubenswrapper[4747]: I1215 06:19:15.630144 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:19:15 crc kubenswrapper[4747]: E1215 06:19:15.631132 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:19:30 crc kubenswrapper[4747]: I1215 06:19:30.630530 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:19:30 crc kubenswrapper[4747]: E1215 06:19:30.631678 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:19:44 crc kubenswrapper[4747]: I1215 
06:19:44.630089 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:19:44 crc kubenswrapper[4747]: E1215 06:19:44.631025 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:19:57 crc kubenswrapper[4747]: I1215 06:19:57.629352 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:19:57 crc kubenswrapper[4747]: E1215 06:19:57.630115 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:20:09 crc kubenswrapper[4747]: I1215 06:20:09.629916 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:20:09 crc kubenswrapper[4747]: E1215 06:20:09.631196 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:20:20 crc 
kubenswrapper[4747]: I1215 06:20:20.720389 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-84688cc58c-2mrlh" podUID="01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 15 06:20:24 crc kubenswrapper[4747]: I1215 06:20:24.630136 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:20:24 crc kubenswrapper[4747]: E1215 06:20:24.630988 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:20:39 crc kubenswrapper[4747]: I1215 06:20:39.629480 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:20:39 crc kubenswrapper[4747]: E1215 06:20:39.630444 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:20:50 crc kubenswrapper[4747]: I1215 06:20:50.630204 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:20:50 crc kubenswrapper[4747]: E1215 06:20:50.631113 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:21:02 crc kubenswrapper[4747]: I1215 06:21:02.629834 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:21:02 crc kubenswrapper[4747]: E1215 06:21:02.631060 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:21:15 crc kubenswrapper[4747]: I1215 06:21:15.629381 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:21:15 crc kubenswrapper[4747]: E1215 06:21:15.630224 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:21:30 crc kubenswrapper[4747]: I1215 06:21:30.630322 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:21:30 crc kubenswrapper[4747]: E1215 06:21:30.631149 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:21:41 crc kubenswrapper[4747]: I1215 06:21:41.628983 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:21:41 crc kubenswrapper[4747]: E1215 06:21:41.630009 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:21:53 crc kubenswrapper[4747]: I1215 06:21:53.629542 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:21:53 crc kubenswrapper[4747]: E1215 06:21:53.630716 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:22:04 crc kubenswrapper[4747]: I1215 06:22:04.629490 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:22:04 crc kubenswrapper[4747]: E1215 06:22:04.630381 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:22:19 crc kubenswrapper[4747]: I1215 06:22:19.629710 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:22:19 crc kubenswrapper[4747]: E1215 06:22:19.630783 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:22:32 crc kubenswrapper[4747]: I1215 06:22:32.629966 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:22:33 crc kubenswrapper[4747]: I1215 06:22:33.416144 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerStarted","Data":"60fd1ac4d9c113facb799066aa46580b651f7fcb364cd5b366815b705fbd1cde"} Dec 15 06:23:20 crc kubenswrapper[4747]: I1215 06:23:20.618415 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dsnzp"] Dec 15 06:23:20 crc kubenswrapper[4747]: E1215 06:23:20.620740 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cfb7053-fbb3-4e63-b51b-1d17353d1b6d" containerName="registry-server" Dec 15 06:23:20 crc kubenswrapper[4747]: I1215 06:23:20.620757 4747 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1cfb7053-fbb3-4e63-b51b-1d17353d1b6d" containerName="registry-server" Dec 15 06:23:20 crc kubenswrapper[4747]: E1215 06:23:20.620775 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cfb7053-fbb3-4e63-b51b-1d17353d1b6d" containerName="extract-content" Dec 15 06:23:20 crc kubenswrapper[4747]: I1215 06:23:20.620781 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cfb7053-fbb3-4e63-b51b-1d17353d1b6d" containerName="extract-content" Dec 15 06:23:20 crc kubenswrapper[4747]: E1215 06:23:20.620801 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cfb7053-fbb3-4e63-b51b-1d17353d1b6d" containerName="extract-utilities" Dec 15 06:23:20 crc kubenswrapper[4747]: I1215 06:23:20.620806 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cfb7053-fbb3-4e63-b51b-1d17353d1b6d" containerName="extract-utilities" Dec 15 06:23:20 crc kubenswrapper[4747]: I1215 06:23:20.621008 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cfb7053-fbb3-4e63-b51b-1d17353d1b6d" containerName="registry-server" Dec 15 06:23:20 crc kubenswrapper[4747]: I1215 06:23:20.622531 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsnzp" Dec 15 06:23:20 crc kubenswrapper[4747]: I1215 06:23:20.645423 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsnzp"] Dec 15 06:23:20 crc kubenswrapper[4747]: I1215 06:23:20.706916 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0976f6d-1736-45d0-884e-a4b9ac6821be-catalog-content\") pod \"redhat-marketplace-dsnzp\" (UID: \"b0976f6d-1736-45d0-884e-a4b9ac6821be\") " pod="openshift-marketplace/redhat-marketplace-dsnzp" Dec 15 06:23:20 crc kubenswrapper[4747]: I1215 06:23:20.707062 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkblh\" (UniqueName: \"kubernetes.io/projected/b0976f6d-1736-45d0-884e-a4b9ac6821be-kube-api-access-mkblh\") pod \"redhat-marketplace-dsnzp\" (UID: \"b0976f6d-1736-45d0-884e-a4b9ac6821be\") " pod="openshift-marketplace/redhat-marketplace-dsnzp" Dec 15 06:23:20 crc kubenswrapper[4747]: I1215 06:23:20.708010 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0976f6d-1736-45d0-884e-a4b9ac6821be-utilities\") pod \"redhat-marketplace-dsnzp\" (UID: \"b0976f6d-1736-45d0-884e-a4b9ac6821be\") " pod="openshift-marketplace/redhat-marketplace-dsnzp" Dec 15 06:23:20 crc kubenswrapper[4747]: I1215 06:23:20.810674 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0976f6d-1736-45d0-884e-a4b9ac6821be-catalog-content\") pod \"redhat-marketplace-dsnzp\" (UID: \"b0976f6d-1736-45d0-884e-a4b9ac6821be\") " pod="openshift-marketplace/redhat-marketplace-dsnzp" Dec 15 06:23:20 crc kubenswrapper[4747]: I1215 06:23:20.811034 4747 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mkblh\" (UniqueName: \"kubernetes.io/projected/b0976f6d-1736-45d0-884e-a4b9ac6821be-kube-api-access-mkblh\") pod \"redhat-marketplace-dsnzp\" (UID: \"b0976f6d-1736-45d0-884e-a4b9ac6821be\") " pod="openshift-marketplace/redhat-marketplace-dsnzp" Dec 15 06:23:20 crc kubenswrapper[4747]: I1215 06:23:20.811206 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0976f6d-1736-45d0-884e-a4b9ac6821be-utilities\") pod \"redhat-marketplace-dsnzp\" (UID: \"b0976f6d-1736-45d0-884e-a4b9ac6821be\") " pod="openshift-marketplace/redhat-marketplace-dsnzp" Dec 15 06:23:20 crc kubenswrapper[4747]: I1215 06:23:20.811390 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0976f6d-1736-45d0-884e-a4b9ac6821be-catalog-content\") pod \"redhat-marketplace-dsnzp\" (UID: \"b0976f6d-1736-45d0-884e-a4b9ac6821be\") " pod="openshift-marketplace/redhat-marketplace-dsnzp" Dec 15 06:23:20 crc kubenswrapper[4747]: I1215 06:23:20.811605 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0976f6d-1736-45d0-884e-a4b9ac6821be-utilities\") pod \"redhat-marketplace-dsnzp\" (UID: \"b0976f6d-1736-45d0-884e-a4b9ac6821be\") " pod="openshift-marketplace/redhat-marketplace-dsnzp" Dec 15 06:23:20 crc kubenswrapper[4747]: I1215 06:23:20.832351 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkblh\" (UniqueName: \"kubernetes.io/projected/b0976f6d-1736-45d0-884e-a4b9ac6821be-kube-api-access-mkblh\") pod \"redhat-marketplace-dsnzp\" (UID: \"b0976f6d-1736-45d0-884e-a4b9ac6821be\") " pod="openshift-marketplace/redhat-marketplace-dsnzp" Dec 15 06:23:20 crc kubenswrapper[4747]: I1215 06:23:20.947911 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsnzp" Dec 15 06:23:21 crc kubenswrapper[4747]: I1215 06:23:21.359970 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsnzp"] Dec 15 06:23:21 crc kubenswrapper[4747]: I1215 06:23:21.861642 4747 generic.go:334] "Generic (PLEG): container finished" podID="b0976f6d-1736-45d0-884e-a4b9ac6821be" containerID="539ff1e540c7391c990826d3645ce0bd6a82df77e8c70ce39e79e7957d615acf" exitCode=0 Dec 15 06:23:21 crc kubenswrapper[4747]: I1215 06:23:21.861722 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsnzp" event={"ID":"b0976f6d-1736-45d0-884e-a4b9ac6821be","Type":"ContainerDied","Data":"539ff1e540c7391c990826d3645ce0bd6a82df77e8c70ce39e79e7957d615acf"} Dec 15 06:23:21 crc kubenswrapper[4747]: I1215 06:23:21.861974 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsnzp" event={"ID":"b0976f6d-1736-45d0-884e-a4b9ac6821be","Type":"ContainerStarted","Data":"dffe78bd27c407612653770370decbdbd09a8d17feffb6fae21907c44f911a8c"} Dec 15 06:23:21 crc kubenswrapper[4747]: I1215 06:23:21.864120 4747 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 15 06:23:22 crc kubenswrapper[4747]: I1215 06:23:22.876467 4747 generic.go:334] "Generic (PLEG): container finished" podID="b0976f6d-1736-45d0-884e-a4b9ac6821be" containerID="8a1b450c68803c00ea36a0c8dd013137345715ca629fc49327be14169dbe7bec" exitCode=0 Dec 15 06:23:22 crc kubenswrapper[4747]: I1215 06:23:22.876568 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsnzp" event={"ID":"b0976f6d-1736-45d0-884e-a4b9ac6821be","Type":"ContainerDied","Data":"8a1b450c68803c00ea36a0c8dd013137345715ca629fc49327be14169dbe7bec"} Dec 15 06:23:23 crc kubenswrapper[4747]: I1215 06:23:23.890720 4747 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-dsnzp" event={"ID":"b0976f6d-1736-45d0-884e-a4b9ac6821be","Type":"ContainerStarted","Data":"e823b3daf6b35fa0f6425b9b01170032ae8fb6c19a3692b25ab4bed062ac78e1"} Dec 15 06:23:23 crc kubenswrapper[4747]: I1215 06:23:23.920025 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dsnzp" podStartSLOduration=2.442367987 podStartE2EDuration="3.920010334s" podCreationTimestamp="2025-12-15 06:23:20 +0000 UTC" firstStartedPulling="2025-12-15 06:23:21.863841685 +0000 UTC m=+2765.560353601" lastFinishedPulling="2025-12-15 06:23:23.34148403 +0000 UTC m=+2767.037995948" observedRunningTime="2025-12-15 06:23:23.912265925 +0000 UTC m=+2767.608777841" watchObservedRunningTime="2025-12-15 06:23:23.920010334 +0000 UTC m=+2767.616522252" Dec 15 06:23:30 crc kubenswrapper[4747]: I1215 06:23:30.948087 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dsnzp" Dec 15 06:23:30 crc kubenswrapper[4747]: I1215 06:23:30.948769 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dsnzp" Dec 15 06:23:30 crc kubenswrapper[4747]: I1215 06:23:30.985306 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dsnzp" Dec 15 06:23:31 crc kubenswrapper[4747]: I1215 06:23:31.997655 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dsnzp" Dec 15 06:23:32 crc kubenswrapper[4747]: I1215 06:23:32.042973 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsnzp"] Dec 15 06:23:33 crc kubenswrapper[4747]: I1215 06:23:33.982569 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dsnzp" 
podUID="b0976f6d-1736-45d0-884e-a4b9ac6821be" containerName="registry-server" containerID="cri-o://e823b3daf6b35fa0f6425b9b01170032ae8fb6c19a3692b25ab4bed062ac78e1" gracePeriod=2 Dec 15 06:23:34 crc kubenswrapper[4747]: I1215 06:23:34.392820 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsnzp" Dec 15 06:23:34 crc kubenswrapper[4747]: I1215 06:23:34.573980 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkblh\" (UniqueName: \"kubernetes.io/projected/b0976f6d-1736-45d0-884e-a4b9ac6821be-kube-api-access-mkblh\") pod \"b0976f6d-1736-45d0-884e-a4b9ac6821be\" (UID: \"b0976f6d-1736-45d0-884e-a4b9ac6821be\") " Dec 15 06:23:34 crc kubenswrapper[4747]: I1215 06:23:34.574414 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0976f6d-1736-45d0-884e-a4b9ac6821be-utilities\") pod \"b0976f6d-1736-45d0-884e-a4b9ac6821be\" (UID: \"b0976f6d-1736-45d0-884e-a4b9ac6821be\") " Dec 15 06:23:34 crc kubenswrapper[4747]: I1215 06:23:34.574597 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0976f6d-1736-45d0-884e-a4b9ac6821be-catalog-content\") pod \"b0976f6d-1736-45d0-884e-a4b9ac6821be\" (UID: \"b0976f6d-1736-45d0-884e-a4b9ac6821be\") " Dec 15 06:23:34 crc kubenswrapper[4747]: I1215 06:23:34.575763 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0976f6d-1736-45d0-884e-a4b9ac6821be-utilities" (OuterVolumeSpecName: "utilities") pod "b0976f6d-1736-45d0-884e-a4b9ac6821be" (UID: "b0976f6d-1736-45d0-884e-a4b9ac6821be"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:23:34 crc kubenswrapper[4747]: I1215 06:23:34.583510 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0976f6d-1736-45d0-884e-a4b9ac6821be-kube-api-access-mkblh" (OuterVolumeSpecName: "kube-api-access-mkblh") pod "b0976f6d-1736-45d0-884e-a4b9ac6821be" (UID: "b0976f6d-1736-45d0-884e-a4b9ac6821be"). InnerVolumeSpecName "kube-api-access-mkblh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:23:34 crc kubenswrapper[4747]: I1215 06:23:34.590558 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0976f6d-1736-45d0-884e-a4b9ac6821be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0976f6d-1736-45d0-884e-a4b9ac6821be" (UID: "b0976f6d-1736-45d0-884e-a4b9ac6821be"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:23:34 crc kubenswrapper[4747]: I1215 06:23:34.677967 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0976f6d-1736-45d0-884e-a4b9ac6821be-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 06:23:34 crc kubenswrapper[4747]: I1215 06:23:34.678008 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkblh\" (UniqueName: \"kubernetes.io/projected/b0976f6d-1736-45d0-884e-a4b9ac6821be-kube-api-access-mkblh\") on node \"crc\" DevicePath \"\"" Dec 15 06:23:34 crc kubenswrapper[4747]: I1215 06:23:34.678027 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0976f6d-1736-45d0-884e-a4b9ac6821be-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 06:23:34 crc kubenswrapper[4747]: I1215 06:23:34.991899 4747 generic.go:334] "Generic (PLEG): container finished" podID="b0976f6d-1736-45d0-884e-a4b9ac6821be" 
containerID="e823b3daf6b35fa0f6425b9b01170032ae8fb6c19a3692b25ab4bed062ac78e1" exitCode=0 Dec 15 06:23:34 crc kubenswrapper[4747]: I1215 06:23:34.991973 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsnzp" event={"ID":"b0976f6d-1736-45d0-884e-a4b9ac6821be","Type":"ContainerDied","Data":"e823b3daf6b35fa0f6425b9b01170032ae8fb6c19a3692b25ab4bed062ac78e1"} Dec 15 06:23:34 crc kubenswrapper[4747]: I1215 06:23:34.991998 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsnzp" Dec 15 06:23:34 crc kubenswrapper[4747]: I1215 06:23:34.992037 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsnzp" event={"ID":"b0976f6d-1736-45d0-884e-a4b9ac6821be","Type":"ContainerDied","Data":"dffe78bd27c407612653770370decbdbd09a8d17feffb6fae21907c44f911a8c"} Dec 15 06:23:34 crc kubenswrapper[4747]: I1215 06:23:34.992069 4747 scope.go:117] "RemoveContainer" containerID="e823b3daf6b35fa0f6425b9b01170032ae8fb6c19a3692b25ab4bed062ac78e1" Dec 15 06:23:35 crc kubenswrapper[4747]: I1215 06:23:35.010890 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsnzp"] Dec 15 06:23:35 crc kubenswrapper[4747]: I1215 06:23:35.011690 4747 scope.go:117] "RemoveContainer" containerID="8a1b450c68803c00ea36a0c8dd013137345715ca629fc49327be14169dbe7bec" Dec 15 06:23:35 crc kubenswrapper[4747]: I1215 06:23:35.017628 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsnzp"] Dec 15 06:23:35 crc kubenswrapper[4747]: I1215 06:23:35.031862 4747 scope.go:117] "RemoveContainer" containerID="539ff1e540c7391c990826d3645ce0bd6a82df77e8c70ce39e79e7957d615acf" Dec 15 06:23:35 crc kubenswrapper[4747]: I1215 06:23:35.071427 4747 scope.go:117] "RemoveContainer" containerID="e823b3daf6b35fa0f6425b9b01170032ae8fb6c19a3692b25ab4bed062ac78e1" Dec 15 
06:23:35 crc kubenswrapper[4747]: E1215 06:23:35.071850 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e823b3daf6b35fa0f6425b9b01170032ae8fb6c19a3692b25ab4bed062ac78e1\": container with ID starting with e823b3daf6b35fa0f6425b9b01170032ae8fb6c19a3692b25ab4bed062ac78e1 not found: ID does not exist" containerID="e823b3daf6b35fa0f6425b9b01170032ae8fb6c19a3692b25ab4bed062ac78e1" Dec 15 06:23:35 crc kubenswrapper[4747]: I1215 06:23:35.071892 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e823b3daf6b35fa0f6425b9b01170032ae8fb6c19a3692b25ab4bed062ac78e1"} err="failed to get container status \"e823b3daf6b35fa0f6425b9b01170032ae8fb6c19a3692b25ab4bed062ac78e1\": rpc error: code = NotFound desc = could not find container \"e823b3daf6b35fa0f6425b9b01170032ae8fb6c19a3692b25ab4bed062ac78e1\": container with ID starting with e823b3daf6b35fa0f6425b9b01170032ae8fb6c19a3692b25ab4bed062ac78e1 not found: ID does not exist" Dec 15 06:23:35 crc kubenswrapper[4747]: I1215 06:23:35.071922 4747 scope.go:117] "RemoveContainer" containerID="8a1b450c68803c00ea36a0c8dd013137345715ca629fc49327be14169dbe7bec" Dec 15 06:23:35 crc kubenswrapper[4747]: E1215 06:23:35.072299 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a1b450c68803c00ea36a0c8dd013137345715ca629fc49327be14169dbe7bec\": container with ID starting with 8a1b450c68803c00ea36a0c8dd013137345715ca629fc49327be14169dbe7bec not found: ID does not exist" containerID="8a1b450c68803c00ea36a0c8dd013137345715ca629fc49327be14169dbe7bec" Dec 15 06:23:35 crc kubenswrapper[4747]: I1215 06:23:35.072335 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a1b450c68803c00ea36a0c8dd013137345715ca629fc49327be14169dbe7bec"} err="failed to get container status 
\"8a1b450c68803c00ea36a0c8dd013137345715ca629fc49327be14169dbe7bec\": rpc error: code = NotFound desc = could not find container \"8a1b450c68803c00ea36a0c8dd013137345715ca629fc49327be14169dbe7bec\": container with ID starting with 8a1b450c68803c00ea36a0c8dd013137345715ca629fc49327be14169dbe7bec not found: ID does not exist" Dec 15 06:23:35 crc kubenswrapper[4747]: I1215 06:23:35.072361 4747 scope.go:117] "RemoveContainer" containerID="539ff1e540c7391c990826d3645ce0bd6a82df77e8c70ce39e79e7957d615acf" Dec 15 06:23:35 crc kubenswrapper[4747]: E1215 06:23:35.072649 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"539ff1e540c7391c990826d3645ce0bd6a82df77e8c70ce39e79e7957d615acf\": container with ID starting with 539ff1e540c7391c990826d3645ce0bd6a82df77e8c70ce39e79e7957d615acf not found: ID does not exist" containerID="539ff1e540c7391c990826d3645ce0bd6a82df77e8c70ce39e79e7957d615acf" Dec 15 06:23:35 crc kubenswrapper[4747]: I1215 06:23:35.072680 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"539ff1e540c7391c990826d3645ce0bd6a82df77e8c70ce39e79e7957d615acf"} err="failed to get container status \"539ff1e540c7391c990826d3645ce0bd6a82df77e8c70ce39e79e7957d615acf\": rpc error: code = NotFound desc = could not find container \"539ff1e540c7391c990826d3645ce0bd6a82df77e8c70ce39e79e7957d615acf\": container with ID starting with 539ff1e540c7391c990826d3645ce0bd6a82df77e8c70ce39e79e7957d615acf not found: ID does not exist" Dec 15 06:23:36 crc kubenswrapper[4747]: I1215 06:23:36.639480 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0976f6d-1736-45d0-884e-a4b9ac6821be" path="/var/lib/kubelet/pods/b0976f6d-1736-45d0-884e-a4b9ac6821be/volumes" Dec 15 06:24:28 crc kubenswrapper[4747]: I1215 06:24:28.361093 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mnbk6"] Dec 15 06:24:28 
crc kubenswrapper[4747]: E1215 06:24:28.362472 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0976f6d-1736-45d0-884e-a4b9ac6821be" containerName="extract-utilities" Dec 15 06:24:28 crc kubenswrapper[4747]: I1215 06:24:28.362492 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0976f6d-1736-45d0-884e-a4b9ac6821be" containerName="extract-utilities" Dec 15 06:24:28 crc kubenswrapper[4747]: E1215 06:24:28.362512 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0976f6d-1736-45d0-884e-a4b9ac6821be" containerName="registry-server" Dec 15 06:24:28 crc kubenswrapper[4747]: I1215 06:24:28.362519 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0976f6d-1736-45d0-884e-a4b9ac6821be" containerName="registry-server" Dec 15 06:24:28 crc kubenswrapper[4747]: E1215 06:24:28.362531 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0976f6d-1736-45d0-884e-a4b9ac6821be" containerName="extract-content" Dec 15 06:24:28 crc kubenswrapper[4747]: I1215 06:24:28.362537 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0976f6d-1736-45d0-884e-a4b9ac6821be" containerName="extract-content" Dec 15 06:24:28 crc kubenswrapper[4747]: I1215 06:24:28.362813 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0976f6d-1736-45d0-884e-a4b9ac6821be" containerName="registry-server" Dec 15 06:24:28 crc kubenswrapper[4747]: I1215 06:24:28.364188 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mnbk6" Dec 15 06:24:28 crc kubenswrapper[4747]: I1215 06:24:28.373061 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mnbk6"] Dec 15 06:24:28 crc kubenswrapper[4747]: I1215 06:24:28.455294 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32e8e73b-aa97-410b-9d14-767d0482a349-utilities\") pod \"community-operators-mnbk6\" (UID: \"32e8e73b-aa97-410b-9d14-767d0482a349\") " pod="openshift-marketplace/community-operators-mnbk6" Dec 15 06:24:28 crc kubenswrapper[4747]: I1215 06:24:28.455384 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32e8e73b-aa97-410b-9d14-767d0482a349-catalog-content\") pod \"community-operators-mnbk6\" (UID: \"32e8e73b-aa97-410b-9d14-767d0482a349\") " pod="openshift-marketplace/community-operators-mnbk6" Dec 15 06:24:28 crc kubenswrapper[4747]: I1215 06:24:28.455548 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhtl9\" (UniqueName: \"kubernetes.io/projected/32e8e73b-aa97-410b-9d14-767d0482a349-kube-api-access-fhtl9\") pod \"community-operators-mnbk6\" (UID: \"32e8e73b-aa97-410b-9d14-767d0482a349\") " pod="openshift-marketplace/community-operators-mnbk6" Dec 15 06:24:28 crc kubenswrapper[4747]: I1215 06:24:28.557551 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhtl9\" (UniqueName: \"kubernetes.io/projected/32e8e73b-aa97-410b-9d14-767d0482a349-kube-api-access-fhtl9\") pod \"community-operators-mnbk6\" (UID: \"32e8e73b-aa97-410b-9d14-767d0482a349\") " pod="openshift-marketplace/community-operators-mnbk6" Dec 15 06:24:28 crc kubenswrapper[4747]: I1215 06:24:28.557789 4747 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32e8e73b-aa97-410b-9d14-767d0482a349-utilities\") pod \"community-operators-mnbk6\" (UID: \"32e8e73b-aa97-410b-9d14-767d0482a349\") " pod="openshift-marketplace/community-operators-mnbk6" Dec 15 06:24:28 crc kubenswrapper[4747]: I1215 06:24:28.557852 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32e8e73b-aa97-410b-9d14-767d0482a349-catalog-content\") pod \"community-operators-mnbk6\" (UID: \"32e8e73b-aa97-410b-9d14-767d0482a349\") " pod="openshift-marketplace/community-operators-mnbk6" Dec 15 06:24:28 crc kubenswrapper[4747]: I1215 06:24:28.558375 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32e8e73b-aa97-410b-9d14-767d0482a349-utilities\") pod \"community-operators-mnbk6\" (UID: \"32e8e73b-aa97-410b-9d14-767d0482a349\") " pod="openshift-marketplace/community-operators-mnbk6" Dec 15 06:24:28 crc kubenswrapper[4747]: I1215 06:24:28.558444 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32e8e73b-aa97-410b-9d14-767d0482a349-catalog-content\") pod \"community-operators-mnbk6\" (UID: \"32e8e73b-aa97-410b-9d14-767d0482a349\") " pod="openshift-marketplace/community-operators-mnbk6" Dec 15 06:24:28 crc kubenswrapper[4747]: I1215 06:24:28.581733 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhtl9\" (UniqueName: \"kubernetes.io/projected/32e8e73b-aa97-410b-9d14-767d0482a349-kube-api-access-fhtl9\") pod \"community-operators-mnbk6\" (UID: \"32e8e73b-aa97-410b-9d14-767d0482a349\") " pod="openshift-marketplace/community-operators-mnbk6" Dec 15 06:24:28 crc kubenswrapper[4747]: I1215 06:24:28.694767 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mnbk6" Dec 15 06:24:29 crc kubenswrapper[4747]: I1215 06:24:29.187822 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mnbk6"] Dec 15 06:24:29 crc kubenswrapper[4747]: I1215 06:24:29.489895 4747 generic.go:334] "Generic (PLEG): container finished" podID="32e8e73b-aa97-410b-9d14-767d0482a349" containerID="3acc61570bbdece164f6d86815086474a6157ff7fb940dc0660d51d07332e7e2" exitCode=0 Dec 15 06:24:29 crc kubenswrapper[4747]: I1215 06:24:29.490312 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mnbk6" event={"ID":"32e8e73b-aa97-410b-9d14-767d0482a349","Type":"ContainerDied","Data":"3acc61570bbdece164f6d86815086474a6157ff7fb940dc0660d51d07332e7e2"} Dec 15 06:24:29 crc kubenswrapper[4747]: I1215 06:24:29.490358 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mnbk6" event={"ID":"32e8e73b-aa97-410b-9d14-767d0482a349","Type":"ContainerStarted","Data":"b066124d9d6b11929e5e5b3934b3b3a77b9a17822515fc611e70862939ac99f3"} Dec 15 06:24:30 crc kubenswrapper[4747]: I1215 06:24:30.505647 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mnbk6" event={"ID":"32e8e73b-aa97-410b-9d14-767d0482a349","Type":"ContainerStarted","Data":"b5d1f187fbe2fcc8d296b41971187aff90f95747a70c25d7a89567d51316df0a"} Dec 15 06:24:31 crc kubenswrapper[4747]: I1215 06:24:31.517883 4747 generic.go:334] "Generic (PLEG): container finished" podID="32e8e73b-aa97-410b-9d14-767d0482a349" containerID="b5d1f187fbe2fcc8d296b41971187aff90f95747a70c25d7a89567d51316df0a" exitCode=0 Dec 15 06:24:31 crc kubenswrapper[4747]: I1215 06:24:31.517992 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mnbk6" 
event={"ID":"32e8e73b-aa97-410b-9d14-767d0482a349","Type":"ContainerDied","Data":"b5d1f187fbe2fcc8d296b41971187aff90f95747a70c25d7a89567d51316df0a"} Dec 15 06:24:32 crc kubenswrapper[4747]: I1215 06:24:32.538964 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mnbk6" event={"ID":"32e8e73b-aa97-410b-9d14-767d0482a349","Type":"ContainerStarted","Data":"8991a1030e1a61cd74cc453176afd2c5a6146a5123cbc6884d81d761e453bf17"} Dec 15 06:24:32 crc kubenswrapper[4747]: I1215 06:24:32.562439 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mnbk6" podStartSLOduration=1.883177958 podStartE2EDuration="4.562421465s" podCreationTimestamp="2025-12-15 06:24:28 +0000 UTC" firstStartedPulling="2025-12-15 06:24:29.492311506 +0000 UTC m=+2833.188823423" lastFinishedPulling="2025-12-15 06:24:32.171555014 +0000 UTC m=+2835.868066930" observedRunningTime="2025-12-15 06:24:32.554256355 +0000 UTC m=+2836.250768271" watchObservedRunningTime="2025-12-15 06:24:32.562421465 +0000 UTC m=+2836.258933382" Dec 15 06:24:38 crc kubenswrapper[4747]: I1215 06:24:38.695124 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mnbk6" Dec 15 06:24:38 crc kubenswrapper[4747]: I1215 06:24:38.695835 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mnbk6" Dec 15 06:24:38 crc kubenswrapper[4747]: I1215 06:24:38.781272 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mnbk6" Dec 15 06:24:39 crc kubenswrapper[4747]: I1215 06:24:39.650553 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mnbk6" Dec 15 06:24:39 crc kubenswrapper[4747]: I1215 06:24:39.704487 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-mnbk6"] Dec 15 06:24:41 crc kubenswrapper[4747]: I1215 06:24:41.629635 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mnbk6" podUID="32e8e73b-aa97-410b-9d14-767d0482a349" containerName="registry-server" containerID="cri-o://8991a1030e1a61cd74cc453176afd2c5a6146a5123cbc6884d81d761e453bf17" gracePeriod=2 Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.039907 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mnbk6" Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.100717 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32e8e73b-aa97-410b-9d14-767d0482a349-catalog-content\") pod \"32e8e73b-aa97-410b-9d14-767d0482a349\" (UID: \"32e8e73b-aa97-410b-9d14-767d0482a349\") " Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.100774 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32e8e73b-aa97-410b-9d14-767d0482a349-utilities\") pod \"32e8e73b-aa97-410b-9d14-767d0482a349\" (UID: \"32e8e73b-aa97-410b-9d14-767d0482a349\") " Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.100878 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhtl9\" (UniqueName: \"kubernetes.io/projected/32e8e73b-aa97-410b-9d14-767d0482a349-kube-api-access-fhtl9\") pod \"32e8e73b-aa97-410b-9d14-767d0482a349\" (UID: \"32e8e73b-aa97-410b-9d14-767d0482a349\") " Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.102707 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32e8e73b-aa97-410b-9d14-767d0482a349-utilities" (OuterVolumeSpecName: "utilities") pod "32e8e73b-aa97-410b-9d14-767d0482a349" (UID: 
"32e8e73b-aa97-410b-9d14-767d0482a349"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.107898 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32e8e73b-aa97-410b-9d14-767d0482a349-kube-api-access-fhtl9" (OuterVolumeSpecName: "kube-api-access-fhtl9") pod "32e8e73b-aa97-410b-9d14-767d0482a349" (UID: "32e8e73b-aa97-410b-9d14-767d0482a349"). InnerVolumeSpecName "kube-api-access-fhtl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.148899 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32e8e73b-aa97-410b-9d14-767d0482a349-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32e8e73b-aa97-410b-9d14-767d0482a349" (UID: "32e8e73b-aa97-410b-9d14-767d0482a349"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.202831 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32e8e73b-aa97-410b-9d14-767d0482a349-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.202864 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32e8e73b-aa97-410b-9d14-767d0482a349-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.202874 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhtl9\" (UniqueName: \"kubernetes.io/projected/32e8e73b-aa97-410b-9d14-767d0482a349-kube-api-access-fhtl9\") on node \"crc\" DevicePath \"\"" Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.642998 4747 generic.go:334] "Generic (PLEG): container finished" 
podID="32e8e73b-aa97-410b-9d14-767d0482a349" containerID="8991a1030e1a61cd74cc453176afd2c5a6146a5123cbc6884d81d761e453bf17" exitCode=0 Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.644391 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mnbk6" event={"ID":"32e8e73b-aa97-410b-9d14-767d0482a349","Type":"ContainerDied","Data":"8991a1030e1a61cd74cc453176afd2c5a6146a5123cbc6884d81d761e453bf17"} Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.644499 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mnbk6" event={"ID":"32e8e73b-aa97-410b-9d14-767d0482a349","Type":"ContainerDied","Data":"b066124d9d6b11929e5e5b3934b3b3a77b9a17822515fc611e70862939ac99f3"} Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.644589 4747 scope.go:117] "RemoveContainer" containerID="8991a1030e1a61cd74cc453176afd2c5a6146a5123cbc6884d81d761e453bf17" Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.644842 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mnbk6" Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.675217 4747 scope.go:117] "RemoveContainer" containerID="b5d1f187fbe2fcc8d296b41971187aff90f95747a70c25d7a89567d51316df0a" Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.677520 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mnbk6"] Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.683876 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mnbk6"] Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.720243 4747 scope.go:117] "RemoveContainer" containerID="3acc61570bbdece164f6d86815086474a6157ff7fb940dc0660d51d07332e7e2" Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.741982 4747 scope.go:117] "RemoveContainer" containerID="8991a1030e1a61cd74cc453176afd2c5a6146a5123cbc6884d81d761e453bf17" Dec 15 06:24:42 crc kubenswrapper[4747]: E1215 06:24:42.742506 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8991a1030e1a61cd74cc453176afd2c5a6146a5123cbc6884d81d761e453bf17\": container with ID starting with 8991a1030e1a61cd74cc453176afd2c5a6146a5123cbc6884d81d761e453bf17 not found: ID does not exist" containerID="8991a1030e1a61cd74cc453176afd2c5a6146a5123cbc6884d81d761e453bf17" Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.742553 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8991a1030e1a61cd74cc453176afd2c5a6146a5123cbc6884d81d761e453bf17"} err="failed to get container status \"8991a1030e1a61cd74cc453176afd2c5a6146a5123cbc6884d81d761e453bf17\": rpc error: code = NotFound desc = could not find container \"8991a1030e1a61cd74cc453176afd2c5a6146a5123cbc6884d81d761e453bf17\": container with ID starting with 8991a1030e1a61cd74cc453176afd2c5a6146a5123cbc6884d81d761e453bf17 not 
found: ID does not exist" Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.742584 4747 scope.go:117] "RemoveContainer" containerID="b5d1f187fbe2fcc8d296b41971187aff90f95747a70c25d7a89567d51316df0a" Dec 15 06:24:42 crc kubenswrapper[4747]: E1215 06:24:42.743042 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5d1f187fbe2fcc8d296b41971187aff90f95747a70c25d7a89567d51316df0a\": container with ID starting with b5d1f187fbe2fcc8d296b41971187aff90f95747a70c25d7a89567d51316df0a not found: ID does not exist" containerID="b5d1f187fbe2fcc8d296b41971187aff90f95747a70c25d7a89567d51316df0a" Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.743083 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5d1f187fbe2fcc8d296b41971187aff90f95747a70c25d7a89567d51316df0a"} err="failed to get container status \"b5d1f187fbe2fcc8d296b41971187aff90f95747a70c25d7a89567d51316df0a\": rpc error: code = NotFound desc = could not find container \"b5d1f187fbe2fcc8d296b41971187aff90f95747a70c25d7a89567d51316df0a\": container with ID starting with b5d1f187fbe2fcc8d296b41971187aff90f95747a70c25d7a89567d51316df0a not found: ID does not exist" Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.743101 4747 scope.go:117] "RemoveContainer" containerID="3acc61570bbdece164f6d86815086474a6157ff7fb940dc0660d51d07332e7e2" Dec 15 06:24:42 crc kubenswrapper[4747]: E1215 06:24:42.743449 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3acc61570bbdece164f6d86815086474a6157ff7fb940dc0660d51d07332e7e2\": container with ID starting with 3acc61570bbdece164f6d86815086474a6157ff7fb940dc0660d51d07332e7e2 not found: ID does not exist" containerID="3acc61570bbdece164f6d86815086474a6157ff7fb940dc0660d51d07332e7e2" Dec 15 06:24:42 crc kubenswrapper[4747]: I1215 06:24:42.743480 4747 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3acc61570bbdece164f6d86815086474a6157ff7fb940dc0660d51d07332e7e2"} err="failed to get container status \"3acc61570bbdece164f6d86815086474a6157ff7fb940dc0660d51d07332e7e2\": rpc error: code = NotFound desc = could not find container \"3acc61570bbdece164f6d86815086474a6157ff7fb940dc0660d51d07332e7e2\": container with ID starting with 3acc61570bbdece164f6d86815086474a6157ff7fb940dc0660d51d07332e7e2 not found: ID does not exist" Dec 15 06:24:44 crc kubenswrapper[4747]: I1215 06:24:44.639502 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32e8e73b-aa97-410b-9d14-767d0482a349" path="/var/lib/kubelet/pods/32e8e73b-aa97-410b-9d14-767d0482a349/volumes" Dec 15 06:24:58 crc kubenswrapper[4747]: I1215 06:24:58.865618 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:24:58 crc kubenswrapper[4747]: I1215 06:24:58.866395 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:25:28 crc kubenswrapper[4747]: I1215 06:25:28.865206 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:25:28 crc kubenswrapper[4747]: I1215 06:25:28.865837 4747 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:25:58 crc kubenswrapper[4747]: I1215 06:25:58.865224 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:25:58 crc kubenswrapper[4747]: I1215 06:25:58.865660 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:25:58 crc kubenswrapper[4747]: I1215 06:25:58.865705 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 06:25:58 crc kubenswrapper[4747]: I1215 06:25:58.866258 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"60fd1ac4d9c113facb799066aa46580b651f7fcb364cd5b366815b705fbd1cde"} pod="openshift-machine-config-operator/machine-config-daemon-nldtn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 15 06:25:58 crc kubenswrapper[4747]: I1215 06:25:58.866320 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" 
containerID="cri-o://60fd1ac4d9c113facb799066aa46580b651f7fcb364cd5b366815b705fbd1cde" gracePeriod=600 Dec 15 06:25:59 crc kubenswrapper[4747]: I1215 06:25:59.355987 4747 generic.go:334] "Generic (PLEG): container finished" podID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerID="60fd1ac4d9c113facb799066aa46580b651f7fcb364cd5b366815b705fbd1cde" exitCode=0 Dec 15 06:25:59 crc kubenswrapper[4747]: I1215 06:25:59.356086 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerDied","Data":"60fd1ac4d9c113facb799066aa46580b651f7fcb364cd5b366815b705fbd1cde"} Dec 15 06:25:59 crc kubenswrapper[4747]: I1215 06:25:59.356438 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerStarted","Data":"44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97"} Dec 15 06:25:59 crc kubenswrapper[4747]: I1215 06:25:59.356467 4747 scope.go:117] "RemoveContainer" containerID="9eda37dbf0bf8dcbb67dcb39717ecb7b947d829042887da16cdd4dbcc8944d16" Dec 15 06:27:48 crc kubenswrapper[4747]: I1215 06:27:48.880711 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wspc8"] Dec 15 06:27:48 crc kubenswrapper[4747]: E1215 06:27:48.881556 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32e8e73b-aa97-410b-9d14-767d0482a349" containerName="registry-server" Dec 15 06:27:48 crc kubenswrapper[4747]: I1215 06:27:48.881571 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="32e8e73b-aa97-410b-9d14-767d0482a349" containerName="registry-server" Dec 15 06:27:48 crc kubenswrapper[4747]: E1215 06:27:48.881588 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32e8e73b-aa97-410b-9d14-767d0482a349" containerName="extract-content" Dec 15 06:27:48 
crc kubenswrapper[4747]: I1215 06:27:48.881594 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="32e8e73b-aa97-410b-9d14-767d0482a349" containerName="extract-content" Dec 15 06:27:48 crc kubenswrapper[4747]: E1215 06:27:48.881626 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32e8e73b-aa97-410b-9d14-767d0482a349" containerName="extract-utilities" Dec 15 06:27:48 crc kubenswrapper[4747]: I1215 06:27:48.881631 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="32e8e73b-aa97-410b-9d14-767d0482a349" containerName="extract-utilities" Dec 15 06:27:48 crc kubenswrapper[4747]: I1215 06:27:48.881812 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="32e8e73b-aa97-410b-9d14-767d0482a349" containerName="registry-server" Dec 15 06:27:48 crc kubenswrapper[4747]: I1215 06:27:48.883061 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wspc8" Dec 15 06:27:48 crc kubenswrapper[4747]: I1215 06:27:48.894809 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wspc8"] Dec 15 06:27:48 crc kubenswrapper[4747]: I1215 06:27:48.919437 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwdxl\" (UniqueName: \"kubernetes.io/projected/da216c79-08a7-425e-9843-e716a087d989-kube-api-access-mwdxl\") pod \"certified-operators-wspc8\" (UID: \"da216c79-08a7-425e-9843-e716a087d989\") " pod="openshift-marketplace/certified-operators-wspc8" Dec 15 06:27:48 crc kubenswrapper[4747]: I1215 06:27:48.919637 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da216c79-08a7-425e-9843-e716a087d989-catalog-content\") pod \"certified-operators-wspc8\" (UID: \"da216c79-08a7-425e-9843-e716a087d989\") " pod="openshift-marketplace/certified-operators-wspc8" Dec 15 
06:27:48 crc kubenswrapper[4747]: I1215 06:27:48.919763 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da216c79-08a7-425e-9843-e716a087d989-utilities\") pod \"certified-operators-wspc8\" (UID: \"da216c79-08a7-425e-9843-e716a087d989\") " pod="openshift-marketplace/certified-operators-wspc8" Dec 15 06:27:49 crc kubenswrapper[4747]: I1215 06:27:49.022252 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da216c79-08a7-425e-9843-e716a087d989-catalog-content\") pod \"certified-operators-wspc8\" (UID: \"da216c79-08a7-425e-9843-e716a087d989\") " pod="openshift-marketplace/certified-operators-wspc8" Dec 15 06:27:49 crc kubenswrapper[4747]: I1215 06:27:49.022364 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da216c79-08a7-425e-9843-e716a087d989-utilities\") pod \"certified-operators-wspc8\" (UID: \"da216c79-08a7-425e-9843-e716a087d989\") " pod="openshift-marketplace/certified-operators-wspc8" Dec 15 06:27:49 crc kubenswrapper[4747]: I1215 06:27:49.022627 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwdxl\" (UniqueName: \"kubernetes.io/projected/da216c79-08a7-425e-9843-e716a087d989-kube-api-access-mwdxl\") pod \"certified-operators-wspc8\" (UID: \"da216c79-08a7-425e-9843-e716a087d989\") " pod="openshift-marketplace/certified-operators-wspc8" Dec 15 06:27:49 crc kubenswrapper[4747]: I1215 06:27:49.022817 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da216c79-08a7-425e-9843-e716a087d989-catalog-content\") pod \"certified-operators-wspc8\" (UID: \"da216c79-08a7-425e-9843-e716a087d989\") " pod="openshift-marketplace/certified-operators-wspc8" Dec 15 06:27:49 crc 
kubenswrapper[4747]: I1215 06:27:49.022875 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da216c79-08a7-425e-9843-e716a087d989-utilities\") pod \"certified-operators-wspc8\" (UID: \"da216c79-08a7-425e-9843-e716a087d989\") " pod="openshift-marketplace/certified-operators-wspc8" Dec 15 06:27:49 crc kubenswrapper[4747]: I1215 06:27:49.045042 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwdxl\" (UniqueName: \"kubernetes.io/projected/da216c79-08a7-425e-9843-e716a087d989-kube-api-access-mwdxl\") pod \"certified-operators-wspc8\" (UID: \"da216c79-08a7-425e-9843-e716a087d989\") " pod="openshift-marketplace/certified-operators-wspc8" Dec 15 06:27:49 crc kubenswrapper[4747]: I1215 06:27:49.200231 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wspc8" Dec 15 06:27:49 crc kubenswrapper[4747]: I1215 06:27:49.692131 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wspc8"] Dec 15 06:27:50 crc kubenswrapper[4747]: I1215 06:27:50.390751 4747 generic.go:334] "Generic (PLEG): container finished" podID="da216c79-08a7-425e-9843-e716a087d989" containerID="07fe45e39c7de7440abddc09b67c6291f4013942c4b1e4791936ccd5cb55e512" exitCode=0 Dec 15 06:27:50 crc kubenswrapper[4747]: I1215 06:27:50.390845 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wspc8" event={"ID":"da216c79-08a7-425e-9843-e716a087d989","Type":"ContainerDied","Data":"07fe45e39c7de7440abddc09b67c6291f4013942c4b1e4791936ccd5cb55e512"} Dec 15 06:27:50 crc kubenswrapper[4747]: I1215 06:27:50.391972 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wspc8" 
event={"ID":"da216c79-08a7-425e-9843-e716a087d989","Type":"ContainerStarted","Data":"28ae98be6bc59687210416bf791664e8c3356bb80c5122fdd9cc0d9cc353428d"} Dec 15 06:27:51 crc kubenswrapper[4747]: I1215 06:27:51.402436 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wspc8" event={"ID":"da216c79-08a7-425e-9843-e716a087d989","Type":"ContainerStarted","Data":"70d0d871344c2669a54025143e606310cded5865167c8532c0ca64ba125e08fb"} Dec 15 06:27:52 crc kubenswrapper[4747]: I1215 06:27:52.414668 4747 generic.go:334] "Generic (PLEG): container finished" podID="da216c79-08a7-425e-9843-e716a087d989" containerID="70d0d871344c2669a54025143e606310cded5865167c8532c0ca64ba125e08fb" exitCode=0 Dec 15 06:27:52 crc kubenswrapper[4747]: I1215 06:27:52.414796 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wspc8" event={"ID":"da216c79-08a7-425e-9843-e716a087d989","Type":"ContainerDied","Data":"70d0d871344c2669a54025143e606310cded5865167c8532c0ca64ba125e08fb"} Dec 15 06:27:53 crc kubenswrapper[4747]: I1215 06:27:53.427860 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wspc8" event={"ID":"da216c79-08a7-425e-9843-e716a087d989","Type":"ContainerStarted","Data":"c160396fc0a167e15ed31bdff04277763762bda7f021754eef8fb5e37ef5ec2f"} Dec 15 06:27:53 crc kubenswrapper[4747]: I1215 06:27:53.449646 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wspc8" podStartSLOduration=2.955337403 podStartE2EDuration="5.449628193s" podCreationTimestamp="2025-12-15 06:27:48 +0000 UTC" firstStartedPulling="2025-12-15 06:27:50.39290001 +0000 UTC m=+3034.089411928" lastFinishedPulling="2025-12-15 06:27:52.887190801 +0000 UTC m=+3036.583702718" observedRunningTime="2025-12-15 06:27:53.447271362 +0000 UTC m=+3037.143783269" watchObservedRunningTime="2025-12-15 06:27:53.449628193 +0000 UTC 
m=+3037.146140110" Dec 15 06:27:59 crc kubenswrapper[4747]: I1215 06:27:59.201131 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wspc8" Dec 15 06:27:59 crc kubenswrapper[4747]: I1215 06:27:59.202788 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wspc8" Dec 15 06:27:59 crc kubenswrapper[4747]: I1215 06:27:59.243359 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wspc8" Dec 15 06:27:59 crc kubenswrapper[4747]: I1215 06:27:59.531587 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wspc8" Dec 15 06:27:59 crc kubenswrapper[4747]: I1215 06:27:59.571805 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wspc8"] Dec 15 06:28:01 crc kubenswrapper[4747]: I1215 06:28:01.513223 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wspc8" podUID="da216c79-08a7-425e-9843-e716a087d989" containerName="registry-server" containerID="cri-o://c160396fc0a167e15ed31bdff04277763762bda7f021754eef8fb5e37ef5ec2f" gracePeriod=2 Dec 15 06:28:01 crc kubenswrapper[4747]: I1215 06:28:01.904579 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wspc8" Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.103942 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da216c79-08a7-425e-9843-e716a087d989-utilities\") pod \"da216c79-08a7-425e-9843-e716a087d989\" (UID: \"da216c79-08a7-425e-9843-e716a087d989\") " Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.104131 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwdxl\" (UniqueName: \"kubernetes.io/projected/da216c79-08a7-425e-9843-e716a087d989-kube-api-access-mwdxl\") pod \"da216c79-08a7-425e-9843-e716a087d989\" (UID: \"da216c79-08a7-425e-9843-e716a087d989\") " Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.104295 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da216c79-08a7-425e-9843-e716a087d989-catalog-content\") pod \"da216c79-08a7-425e-9843-e716a087d989\" (UID: \"da216c79-08a7-425e-9843-e716a087d989\") " Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.104763 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da216c79-08a7-425e-9843-e716a087d989-utilities" (OuterVolumeSpecName: "utilities") pod "da216c79-08a7-425e-9843-e716a087d989" (UID: "da216c79-08a7-425e-9843-e716a087d989"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.110868 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da216c79-08a7-425e-9843-e716a087d989-kube-api-access-mwdxl" (OuterVolumeSpecName: "kube-api-access-mwdxl") pod "da216c79-08a7-425e-9843-e716a087d989" (UID: "da216c79-08a7-425e-9843-e716a087d989"). InnerVolumeSpecName "kube-api-access-mwdxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.146273 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da216c79-08a7-425e-9843-e716a087d989-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da216c79-08a7-425e-9843-e716a087d989" (UID: "da216c79-08a7-425e-9843-e716a087d989"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.208143 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da216c79-08a7-425e-9843-e716a087d989-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.208180 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da216c79-08a7-425e-9843-e716a087d989-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.208199 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwdxl\" (UniqueName: \"kubernetes.io/projected/da216c79-08a7-425e-9843-e716a087d989-kube-api-access-mwdxl\") on node \"crc\" DevicePath \"\"" Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.524442 4747 generic.go:334] "Generic (PLEG): container finished" podID="da216c79-08a7-425e-9843-e716a087d989" containerID="c160396fc0a167e15ed31bdff04277763762bda7f021754eef8fb5e37ef5ec2f" exitCode=0 Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.524508 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wspc8" event={"ID":"da216c79-08a7-425e-9843-e716a087d989","Type":"ContainerDied","Data":"c160396fc0a167e15ed31bdff04277763762bda7f021754eef8fb5e37ef5ec2f"} Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.524523 4747 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wspc8" Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.524552 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wspc8" event={"ID":"da216c79-08a7-425e-9843-e716a087d989","Type":"ContainerDied","Data":"28ae98be6bc59687210416bf791664e8c3356bb80c5122fdd9cc0d9cc353428d"} Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.524575 4747 scope.go:117] "RemoveContainer" containerID="c160396fc0a167e15ed31bdff04277763762bda7f021754eef8fb5e37ef5ec2f" Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.556247 4747 scope.go:117] "RemoveContainer" containerID="70d0d871344c2669a54025143e606310cded5865167c8532c0ca64ba125e08fb" Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.559223 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wspc8"] Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.567436 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wspc8"] Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.575447 4747 scope.go:117] "RemoveContainer" containerID="07fe45e39c7de7440abddc09b67c6291f4013942c4b1e4791936ccd5cb55e512" Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.615048 4747 scope.go:117] "RemoveContainer" containerID="c160396fc0a167e15ed31bdff04277763762bda7f021754eef8fb5e37ef5ec2f" Dec 15 06:28:02 crc kubenswrapper[4747]: E1215 06:28:02.617059 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c160396fc0a167e15ed31bdff04277763762bda7f021754eef8fb5e37ef5ec2f\": container with ID starting with c160396fc0a167e15ed31bdff04277763762bda7f021754eef8fb5e37ef5ec2f not found: ID does not exist" containerID="c160396fc0a167e15ed31bdff04277763762bda7f021754eef8fb5e37ef5ec2f" Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.617105 
4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c160396fc0a167e15ed31bdff04277763762bda7f021754eef8fb5e37ef5ec2f"} err="failed to get container status \"c160396fc0a167e15ed31bdff04277763762bda7f021754eef8fb5e37ef5ec2f\": rpc error: code = NotFound desc = could not find container \"c160396fc0a167e15ed31bdff04277763762bda7f021754eef8fb5e37ef5ec2f\": container with ID starting with c160396fc0a167e15ed31bdff04277763762bda7f021754eef8fb5e37ef5ec2f not found: ID does not exist" Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.617131 4747 scope.go:117] "RemoveContainer" containerID="70d0d871344c2669a54025143e606310cded5865167c8532c0ca64ba125e08fb" Dec 15 06:28:02 crc kubenswrapper[4747]: E1215 06:28:02.617572 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70d0d871344c2669a54025143e606310cded5865167c8532c0ca64ba125e08fb\": container with ID starting with 70d0d871344c2669a54025143e606310cded5865167c8532c0ca64ba125e08fb not found: ID does not exist" containerID="70d0d871344c2669a54025143e606310cded5865167c8532c0ca64ba125e08fb" Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.617615 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70d0d871344c2669a54025143e606310cded5865167c8532c0ca64ba125e08fb"} err="failed to get container status \"70d0d871344c2669a54025143e606310cded5865167c8532c0ca64ba125e08fb\": rpc error: code = NotFound desc = could not find container \"70d0d871344c2669a54025143e606310cded5865167c8532c0ca64ba125e08fb\": container with ID starting with 70d0d871344c2669a54025143e606310cded5865167c8532c0ca64ba125e08fb not found: ID does not exist" Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.617631 4747 scope.go:117] "RemoveContainer" containerID="07fe45e39c7de7440abddc09b67c6291f4013942c4b1e4791936ccd5cb55e512" Dec 15 06:28:02 crc kubenswrapper[4747]: E1215 
06:28:02.617958 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07fe45e39c7de7440abddc09b67c6291f4013942c4b1e4791936ccd5cb55e512\": container with ID starting with 07fe45e39c7de7440abddc09b67c6291f4013942c4b1e4791936ccd5cb55e512 not found: ID does not exist" containerID="07fe45e39c7de7440abddc09b67c6291f4013942c4b1e4791936ccd5cb55e512" Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.617984 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07fe45e39c7de7440abddc09b67c6291f4013942c4b1e4791936ccd5cb55e512"} err="failed to get container status \"07fe45e39c7de7440abddc09b67c6291f4013942c4b1e4791936ccd5cb55e512\": rpc error: code = NotFound desc = could not find container \"07fe45e39c7de7440abddc09b67c6291f4013942c4b1e4791936ccd5cb55e512\": container with ID starting with 07fe45e39c7de7440abddc09b67c6291f4013942c4b1e4791936ccd5cb55e512 not found: ID does not exist" Dec 15 06:28:02 crc kubenswrapper[4747]: I1215 06:28:02.638567 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da216c79-08a7-425e-9843-e716a087d989" path="/var/lib/kubelet/pods/da216c79-08a7-425e-9843-e716a087d989/volumes" Dec 15 06:28:28 crc kubenswrapper[4747]: I1215 06:28:28.865267 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:28:28 crc kubenswrapper[4747]: I1215 06:28:28.865994 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Dec 15 06:28:58 crc kubenswrapper[4747]: I1215 06:28:58.865783 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:28:58 crc kubenswrapper[4747]: I1215 06:28:58.866332 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:29:28 crc kubenswrapper[4747]: I1215 06:29:28.866043 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:29:28 crc kubenswrapper[4747]: I1215 06:29:28.866736 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:29:28 crc kubenswrapper[4747]: I1215 06:29:28.866797 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 06:29:28 crc kubenswrapper[4747]: I1215 06:29:28.867528 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97"} 
pod="openshift-machine-config-operator/machine-config-daemon-nldtn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 15 06:29:28 crc kubenswrapper[4747]: I1215 06:29:28.867585 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" containerID="cri-o://44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" gracePeriod=600 Dec 15 06:29:28 crc kubenswrapper[4747]: E1215 06:29:28.995585 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:29:29 crc kubenswrapper[4747]: I1215 06:29:29.361539 4747 generic.go:334] "Generic (PLEG): container finished" podID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" exitCode=0 Dec 15 06:29:29 crc kubenswrapper[4747]: I1215 06:29:29.361619 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerDied","Data":"44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97"} Dec 15 06:29:29 crc kubenswrapper[4747]: I1215 06:29:29.361716 4747 scope.go:117] "RemoveContainer" containerID="60fd1ac4d9c113facb799066aa46580b651f7fcb364cd5b366815b705fbd1cde" Dec 15 06:29:29 crc kubenswrapper[4747]: I1215 06:29:29.362474 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 
15 06:29:29 crc kubenswrapper[4747]: E1215 06:29:29.362910 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:29:40 crc kubenswrapper[4747]: I1215 06:29:40.629760 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:29:40 crc kubenswrapper[4747]: E1215 06:29:40.630900 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:29:52 crc kubenswrapper[4747]: I1215 06:29:52.630321 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:29:52 crc kubenswrapper[4747]: E1215 06:29:52.631582 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:30:00 crc kubenswrapper[4747]: I1215 06:30:00.145713 4747 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29429670-gdnxx"] Dec 15 06:30:00 crc kubenswrapper[4747]: E1215 06:30:00.146604 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da216c79-08a7-425e-9843-e716a087d989" containerName="extract-utilities" Dec 15 06:30:00 crc kubenswrapper[4747]: I1215 06:30:00.146620 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="da216c79-08a7-425e-9843-e716a087d989" containerName="extract-utilities" Dec 15 06:30:00 crc kubenswrapper[4747]: E1215 06:30:00.146668 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da216c79-08a7-425e-9843-e716a087d989" containerName="registry-server" Dec 15 06:30:00 crc kubenswrapper[4747]: I1215 06:30:00.146676 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="da216c79-08a7-425e-9843-e716a087d989" containerName="registry-server" Dec 15 06:30:00 crc kubenswrapper[4747]: E1215 06:30:00.146689 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da216c79-08a7-425e-9843-e716a087d989" containerName="extract-content" Dec 15 06:30:00 crc kubenswrapper[4747]: I1215 06:30:00.146695 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="da216c79-08a7-425e-9843-e716a087d989" containerName="extract-content" Dec 15 06:30:00 crc kubenswrapper[4747]: I1215 06:30:00.146892 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="da216c79-08a7-425e-9843-e716a087d989" containerName="registry-server" Dec 15 06:30:00 crc kubenswrapper[4747]: I1215 06:30:00.147552 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29429670-gdnxx" Dec 15 06:30:00 crc kubenswrapper[4747]: I1215 06:30:00.149820 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 15 06:30:00 crc kubenswrapper[4747]: I1215 06:30:00.150852 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 15 06:30:00 crc kubenswrapper[4747]: I1215 06:30:00.155381 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29429670-gdnxx"] Dec 15 06:30:00 crc kubenswrapper[4747]: I1215 06:30:00.177654 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/147beecc-48fb-4593-9bb7-a8f01e59beee-secret-volume\") pod \"collect-profiles-29429670-gdnxx\" (UID: \"147beecc-48fb-4593-9bb7-a8f01e59beee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429670-gdnxx" Dec 15 06:30:00 crc kubenswrapper[4747]: I1215 06:30:00.177916 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9xbd\" (UniqueName: \"kubernetes.io/projected/147beecc-48fb-4593-9bb7-a8f01e59beee-kube-api-access-v9xbd\") pod \"collect-profiles-29429670-gdnxx\" (UID: \"147beecc-48fb-4593-9bb7-a8f01e59beee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429670-gdnxx" Dec 15 06:30:00 crc kubenswrapper[4747]: I1215 06:30:00.178077 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/147beecc-48fb-4593-9bb7-a8f01e59beee-config-volume\") pod \"collect-profiles-29429670-gdnxx\" (UID: \"147beecc-48fb-4593-9bb7-a8f01e59beee\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29429670-gdnxx" Dec 15 06:30:00 crc kubenswrapper[4747]: I1215 06:30:00.279523 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9xbd\" (UniqueName: \"kubernetes.io/projected/147beecc-48fb-4593-9bb7-a8f01e59beee-kube-api-access-v9xbd\") pod \"collect-profiles-29429670-gdnxx\" (UID: \"147beecc-48fb-4593-9bb7-a8f01e59beee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429670-gdnxx" Dec 15 06:30:00 crc kubenswrapper[4747]: I1215 06:30:00.279675 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/147beecc-48fb-4593-9bb7-a8f01e59beee-config-volume\") pod \"collect-profiles-29429670-gdnxx\" (UID: \"147beecc-48fb-4593-9bb7-a8f01e59beee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429670-gdnxx" Dec 15 06:30:00 crc kubenswrapper[4747]: I1215 06:30:00.279790 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/147beecc-48fb-4593-9bb7-a8f01e59beee-secret-volume\") pod \"collect-profiles-29429670-gdnxx\" (UID: \"147beecc-48fb-4593-9bb7-a8f01e59beee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429670-gdnxx" Dec 15 06:30:00 crc kubenswrapper[4747]: I1215 06:30:00.280592 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/147beecc-48fb-4593-9bb7-a8f01e59beee-config-volume\") pod \"collect-profiles-29429670-gdnxx\" (UID: \"147beecc-48fb-4593-9bb7-a8f01e59beee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429670-gdnxx" Dec 15 06:30:00 crc kubenswrapper[4747]: I1215 06:30:00.286909 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/147beecc-48fb-4593-9bb7-a8f01e59beee-secret-volume\") pod \"collect-profiles-29429670-gdnxx\" (UID: \"147beecc-48fb-4593-9bb7-a8f01e59beee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429670-gdnxx" Dec 15 06:30:00 crc kubenswrapper[4747]: I1215 06:30:00.295662 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9xbd\" (UniqueName: \"kubernetes.io/projected/147beecc-48fb-4593-9bb7-a8f01e59beee-kube-api-access-v9xbd\") pod \"collect-profiles-29429670-gdnxx\" (UID: \"147beecc-48fb-4593-9bb7-a8f01e59beee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29429670-gdnxx" Dec 15 06:30:00 crc kubenswrapper[4747]: I1215 06:30:00.469334 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29429670-gdnxx" Dec 15 06:30:00 crc kubenswrapper[4747]: I1215 06:30:00.890478 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29429670-gdnxx"] Dec 15 06:30:01 crc kubenswrapper[4747]: I1215 06:30:01.690029 4747 generic.go:334] "Generic (PLEG): container finished" podID="147beecc-48fb-4593-9bb7-a8f01e59beee" containerID="de4b0598f0f6ef1ec1556ff52691954b90c886c80a64069523553caebf440aa4" exitCode=0 Dec 15 06:30:01 crc kubenswrapper[4747]: I1215 06:30:01.690102 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29429670-gdnxx" event={"ID":"147beecc-48fb-4593-9bb7-a8f01e59beee","Type":"ContainerDied","Data":"de4b0598f0f6ef1ec1556ff52691954b90c886c80a64069523553caebf440aa4"} Dec 15 06:30:01 crc kubenswrapper[4747]: I1215 06:30:01.690395 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29429670-gdnxx" 
event={"ID":"147beecc-48fb-4593-9bb7-a8f01e59beee","Type":"ContainerStarted","Data":"f896eec2fe8b01efff7238302acde91f047df11d370fd68200e4ea59929e0abb"} Dec 15 06:30:02 crc kubenswrapper[4747]: I1215 06:30:02.987275 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29429670-gdnxx" Dec 15 06:30:03 crc kubenswrapper[4747]: I1215 06:30:03.148270 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9xbd\" (UniqueName: \"kubernetes.io/projected/147beecc-48fb-4593-9bb7-a8f01e59beee-kube-api-access-v9xbd\") pod \"147beecc-48fb-4593-9bb7-a8f01e59beee\" (UID: \"147beecc-48fb-4593-9bb7-a8f01e59beee\") " Dec 15 06:30:03 crc kubenswrapper[4747]: I1215 06:30:03.148770 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/147beecc-48fb-4593-9bb7-a8f01e59beee-secret-volume\") pod \"147beecc-48fb-4593-9bb7-a8f01e59beee\" (UID: \"147beecc-48fb-4593-9bb7-a8f01e59beee\") " Dec 15 06:30:03 crc kubenswrapper[4747]: I1215 06:30:03.148830 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/147beecc-48fb-4593-9bb7-a8f01e59beee-config-volume\") pod \"147beecc-48fb-4593-9bb7-a8f01e59beee\" (UID: \"147beecc-48fb-4593-9bb7-a8f01e59beee\") " Dec 15 06:30:03 crc kubenswrapper[4747]: I1215 06:30:03.149683 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/147beecc-48fb-4593-9bb7-a8f01e59beee-config-volume" (OuterVolumeSpecName: "config-volume") pod "147beecc-48fb-4593-9bb7-a8f01e59beee" (UID: "147beecc-48fb-4593-9bb7-a8f01e59beee"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 06:30:03 crc kubenswrapper[4747]: I1215 06:30:03.167455 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147beecc-48fb-4593-9bb7-a8f01e59beee-kube-api-access-v9xbd" (OuterVolumeSpecName: "kube-api-access-v9xbd") pod "147beecc-48fb-4593-9bb7-a8f01e59beee" (UID: "147beecc-48fb-4593-9bb7-a8f01e59beee"). InnerVolumeSpecName "kube-api-access-v9xbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:30:03 crc kubenswrapper[4747]: I1215 06:30:03.167616 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/147beecc-48fb-4593-9bb7-a8f01e59beee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "147beecc-48fb-4593-9bb7-a8f01e59beee" (UID: "147beecc-48fb-4593-9bb7-a8f01e59beee"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:30:03 crc kubenswrapper[4747]: I1215 06:30:03.251268 4747 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/147beecc-48fb-4593-9bb7-a8f01e59beee-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 15 06:30:03 crc kubenswrapper[4747]: I1215 06:30:03.251303 4747 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/147beecc-48fb-4593-9bb7-a8f01e59beee-config-volume\") on node \"crc\" DevicePath \"\"" Dec 15 06:30:03 crc kubenswrapper[4747]: I1215 06:30:03.251315 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9xbd\" (UniqueName: \"kubernetes.io/projected/147beecc-48fb-4593-9bb7-a8f01e59beee-kube-api-access-v9xbd\") on node \"crc\" DevicePath \"\"" Dec 15 06:30:03 crc kubenswrapper[4747]: I1215 06:30:03.710352 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29429670-gdnxx" 
event={"ID":"147beecc-48fb-4593-9bb7-a8f01e59beee","Type":"ContainerDied","Data":"f896eec2fe8b01efff7238302acde91f047df11d370fd68200e4ea59929e0abb"} Dec 15 06:30:03 crc kubenswrapper[4747]: I1215 06:30:03.710398 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29429670-gdnxx" Dec 15 06:30:03 crc kubenswrapper[4747]: I1215 06:30:03.710408 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f896eec2fe8b01efff7238302acde91f047df11d370fd68200e4ea59929e0abb" Dec 15 06:30:04 crc kubenswrapper[4747]: I1215 06:30:04.053762 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29429625-64ffh"] Dec 15 06:30:04 crc kubenswrapper[4747]: I1215 06:30:04.061444 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29429625-64ffh"] Dec 15 06:30:04 crc kubenswrapper[4747]: I1215 06:30:04.640618 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dab2e78-e203-4f2e-9b13-a42f800038f2" path="/var/lib/kubelet/pods/3dab2e78-e203-4f2e-9b13-a42f800038f2/volumes" Dec 15 06:30:07 crc kubenswrapper[4747]: I1215 06:30:07.629620 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:30:07 crc kubenswrapper[4747]: E1215 06:30:07.630379 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:30:21 crc kubenswrapper[4747]: I1215 06:30:21.630612 4747 scope.go:117] "RemoveContainer" 
containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:30:21 crc kubenswrapper[4747]: E1215 06:30:21.631840 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:30:33 crc kubenswrapper[4747]: I1215 06:30:33.944313 4747 scope.go:117] "RemoveContainer" containerID="249493c95bd35a370396abe1211fa33a94d8774a9c8b72abf99d7c8f9cd4aa14" Dec 15 06:30:35 crc kubenswrapper[4747]: I1215 06:30:35.628986 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:30:35 crc kubenswrapper[4747]: E1215 06:30:35.629802 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:30:46 crc kubenswrapper[4747]: I1215 06:30:46.634168 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:30:46 crc kubenswrapper[4747]: E1215 06:30:46.635251 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:30:59 crc kubenswrapper[4747]: I1215 06:30:59.221277 4747 generic.go:334] "Generic (PLEG): container finished" podID="0feaf663-b187-479f-8129-5aa6bf3b9047" containerID="c651b9f9b46b04dc8f22725cc433a185fffb5dc7c7926f207814eedd8a92eb3e" exitCode=0 Dec 15 06:30:59 crc kubenswrapper[4747]: I1215 06:30:59.221323 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0feaf663-b187-479f-8129-5aa6bf3b9047","Type":"ContainerDied","Data":"c651b9f9b46b04dc8f22725cc433a185fffb5dc7c7926f207814eedd8a92eb3e"} Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.512189 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.674500 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"0feaf663-b187-479f-8129-5aa6bf3b9047\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.674988 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0feaf663-b187-479f-8129-5aa6bf3b9047-ssh-key\") pod \"0feaf663-b187-479f-8129-5aa6bf3b9047\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.675075 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0feaf663-b187-479f-8129-5aa6bf3b9047-config-data\") pod \"0feaf663-b187-479f-8129-5aa6bf3b9047\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.675134 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0feaf663-b187-479f-8129-5aa6bf3b9047-openstack-config\") pod \"0feaf663-b187-479f-8129-5aa6bf3b9047\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.675198 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0feaf663-b187-479f-8129-5aa6bf3b9047-test-operator-ephemeral-workdir\") pod \"0feaf663-b187-479f-8129-5aa6bf3b9047\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.675358 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0feaf663-b187-479f-8129-5aa6bf3b9047-openstack-config-secret\") pod \"0feaf663-b187-479f-8129-5aa6bf3b9047\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.675403 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk7v2\" (UniqueName: \"kubernetes.io/projected/0feaf663-b187-479f-8129-5aa6bf3b9047-kube-api-access-nk7v2\") pod \"0feaf663-b187-479f-8129-5aa6bf3b9047\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.675453 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0feaf663-b187-479f-8129-5aa6bf3b9047-ca-certs\") pod \"0feaf663-b187-479f-8129-5aa6bf3b9047\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.675497 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0feaf663-b187-479f-8129-5aa6bf3b9047-test-operator-ephemeral-temporary\") pod 
\"0feaf663-b187-479f-8129-5aa6bf3b9047\" (UID: \"0feaf663-b187-479f-8129-5aa6bf3b9047\") " Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.676455 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0feaf663-b187-479f-8129-5aa6bf3b9047-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "0feaf663-b187-479f-8129-5aa6bf3b9047" (UID: "0feaf663-b187-479f-8129-5aa6bf3b9047"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.676451 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0feaf663-b187-479f-8129-5aa6bf3b9047-config-data" (OuterVolumeSpecName: "config-data") pod "0feaf663-b187-479f-8129-5aa6bf3b9047" (UID: "0feaf663-b187-479f-8129-5aa6bf3b9047"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.678439 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0feaf663-b187-479f-8129-5aa6bf3b9047-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "0feaf663-b187-479f-8129-5aa6bf3b9047" (UID: "0feaf663-b187-479f-8129-5aa6bf3b9047"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.681690 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0feaf663-b187-479f-8129-5aa6bf3b9047-kube-api-access-nk7v2" (OuterVolumeSpecName: "kube-api-access-nk7v2") pod "0feaf663-b187-479f-8129-5aa6bf3b9047" (UID: "0feaf663-b187-479f-8129-5aa6bf3b9047"). InnerVolumeSpecName "kube-api-access-nk7v2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.681889 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "0feaf663-b187-479f-8129-5aa6bf3b9047" (UID: "0feaf663-b187-479f-8129-5aa6bf3b9047"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.701352 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0feaf663-b187-479f-8129-5aa6bf3b9047-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0feaf663-b187-479f-8129-5aa6bf3b9047" (UID: "0feaf663-b187-479f-8129-5aa6bf3b9047"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.701553 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0feaf663-b187-479f-8129-5aa6bf3b9047-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "0feaf663-b187-479f-8129-5aa6bf3b9047" (UID: "0feaf663-b187-479f-8129-5aa6bf3b9047"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.703573 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0feaf663-b187-479f-8129-5aa6bf3b9047-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "0feaf663-b187-479f-8129-5aa6bf3b9047" (UID: "0feaf663-b187-479f-8129-5aa6bf3b9047"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.716820 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0feaf663-b187-479f-8129-5aa6bf3b9047-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "0feaf663-b187-479f-8129-5aa6bf3b9047" (UID: "0feaf663-b187-479f-8129-5aa6bf3b9047"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.779136 4747 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0feaf663-b187-479f-8129-5aa6bf3b9047-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.779225 4747 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.779241 4747 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0feaf663-b187-479f-8129-5aa6bf3b9047-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.779252 4747 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0feaf663-b187-479f-8129-5aa6bf3b9047-config-data\") on node \"crc\" DevicePath \"\"" Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.779265 4747 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0feaf663-b187-479f-8129-5aa6bf3b9047-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.779280 4747 reconciler_common.go:293] "Volume detached for volume 
\"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0feaf663-b187-479f-8129-5aa6bf3b9047-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.779290 4747 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0feaf663-b187-479f-8129-5aa6bf3b9047-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.779300 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk7v2\" (UniqueName: \"kubernetes.io/projected/0feaf663-b187-479f-8129-5aa6bf3b9047-kube-api-access-nk7v2\") on node \"crc\" DevicePath \"\"" Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.779311 4747 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0feaf663-b187-479f-8129-5aa6bf3b9047-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.796496 4747 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 15 06:31:00 crc kubenswrapper[4747]: I1215 06:31:00.881847 4747 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 15 06:31:01 crc kubenswrapper[4747]: I1215 06:31:01.243506 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0feaf663-b187-479f-8129-5aa6bf3b9047","Type":"ContainerDied","Data":"c6e509740af1d4331af655bdf15357ef739ac1c5e40b9056664ed3e99cb15c46"} Dec 15 06:31:01 crc kubenswrapper[4747]: I1215 06:31:01.243570 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6e509740af1d4331af655bdf15357ef739ac1c5e40b9056664ed3e99cb15c46" Dec 
15 06:31:01 crc kubenswrapper[4747]: I1215 06:31:01.243579 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 15 06:31:01 crc kubenswrapper[4747]: I1215 06:31:01.629519 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:31:01 crc kubenswrapper[4747]: E1215 06:31:01.630019 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:31:03 crc kubenswrapper[4747]: I1215 06:31:03.404017 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 15 06:31:03 crc kubenswrapper[4747]: E1215 06:31:03.404979 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147beecc-48fb-4593-9bb7-a8f01e59beee" containerName="collect-profiles" Dec 15 06:31:03 crc kubenswrapper[4747]: I1215 06:31:03.404995 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="147beecc-48fb-4593-9bb7-a8f01e59beee" containerName="collect-profiles" Dec 15 06:31:03 crc kubenswrapper[4747]: E1215 06:31:03.405012 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0feaf663-b187-479f-8129-5aa6bf3b9047" containerName="tempest-tests-tempest-tests-runner" Dec 15 06:31:03 crc kubenswrapper[4747]: I1215 06:31:03.405017 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="0feaf663-b187-479f-8129-5aa6bf3b9047" containerName="tempest-tests-tempest-tests-runner" Dec 15 06:31:03 crc kubenswrapper[4747]: I1215 06:31:03.405266 4747 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0feaf663-b187-479f-8129-5aa6bf3b9047" containerName="tempest-tests-tempest-tests-runner" Dec 15 06:31:03 crc kubenswrapper[4747]: I1215 06:31:03.405280 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="147beecc-48fb-4593-9bb7-a8f01e59beee" containerName="collect-profiles" Dec 15 06:31:03 crc kubenswrapper[4747]: I1215 06:31:03.406037 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 15 06:31:03 crc kubenswrapper[4747]: I1215 06:31:03.408055 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fdjw5" Dec 15 06:31:03 crc kubenswrapper[4747]: I1215 06:31:03.412658 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 15 06:31:03 crc kubenswrapper[4747]: I1215 06:31:03.524418 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhc5n\" (UniqueName: \"kubernetes.io/projected/b3fb43ea-8919-4b3f-bfd7-27ee6d7e8a0b-kube-api-access-jhc5n\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b3fb43ea-8919-4b3f-bfd7-27ee6d7e8a0b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 15 06:31:03 crc kubenswrapper[4747]: I1215 06:31:03.524484 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b3fb43ea-8919-4b3f-bfd7-27ee6d7e8a0b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 15 06:31:03 crc kubenswrapper[4747]: I1215 06:31:03.626487 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhc5n\" (UniqueName: 
\"kubernetes.io/projected/b3fb43ea-8919-4b3f-bfd7-27ee6d7e8a0b-kube-api-access-jhc5n\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b3fb43ea-8919-4b3f-bfd7-27ee6d7e8a0b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 15 06:31:03 crc kubenswrapper[4747]: I1215 06:31:03.626645 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b3fb43ea-8919-4b3f-bfd7-27ee6d7e8a0b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 15 06:31:03 crc kubenswrapper[4747]: I1215 06:31:03.627096 4747 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b3fb43ea-8919-4b3f-bfd7-27ee6d7e8a0b\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 15 06:31:03 crc kubenswrapper[4747]: I1215 06:31:03.647064 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhc5n\" (UniqueName: \"kubernetes.io/projected/b3fb43ea-8919-4b3f-bfd7-27ee6d7e8a0b-kube-api-access-jhc5n\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b3fb43ea-8919-4b3f-bfd7-27ee6d7e8a0b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 15 06:31:03 crc kubenswrapper[4747]: I1215 06:31:03.649632 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b3fb43ea-8919-4b3f-bfd7-27ee6d7e8a0b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 15 06:31:03 
crc kubenswrapper[4747]: I1215 06:31:03.729066 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 15 06:31:04 crc kubenswrapper[4747]: I1215 06:31:04.123688 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 15 06:31:04 crc kubenswrapper[4747]: I1215 06:31:04.135648 4747 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 15 06:31:04 crc kubenswrapper[4747]: I1215 06:31:04.269228 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b3fb43ea-8919-4b3f-bfd7-27ee6d7e8a0b","Type":"ContainerStarted","Data":"f34cf396fda555acc37c561d45298d00312948d748b06e13d7d94585171eda00"} Dec 15 06:31:06 crc kubenswrapper[4747]: I1215 06:31:06.287021 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b3fb43ea-8919-4b3f-bfd7-27ee6d7e8a0b","Type":"ContainerStarted","Data":"934030bc22eaa59ab0a3de8175ae3cab94136d3e103eec6ba7a9c2d1344a0cfa"} Dec 15 06:31:06 crc kubenswrapper[4747]: I1215 06:31:06.298750 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.236363506 podStartE2EDuration="3.298736691s" podCreationTimestamp="2025-12-15 06:31:03 +0000 UTC" firstStartedPulling="2025-12-15 06:31:04.13541845 +0000 UTC m=+3227.831930366" lastFinishedPulling="2025-12-15 06:31:05.197791635 +0000 UTC m=+3228.894303551" observedRunningTime="2025-12-15 06:31:06.297439393 +0000 UTC m=+3229.993951310" watchObservedRunningTime="2025-12-15 06:31:06.298736691 +0000 UTC m=+3229.995248608" Dec 15 06:31:12 crc kubenswrapper[4747]: I1215 06:31:12.629880 4747 scope.go:117] "RemoveContainer" 
containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:31:12 crc kubenswrapper[4747]: E1215 06:31:12.631209 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:31:23 crc kubenswrapper[4747]: I1215 06:31:23.793583 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-f4tdk/must-gather-m4b67"] Dec 15 06:31:23 crc kubenswrapper[4747]: I1215 06:31:23.795563 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-f4tdk/must-gather-m4b67" Dec 15 06:31:23 crc kubenswrapper[4747]: I1215 06:31:23.798164 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-f4tdk"/"default-dockercfg-5hmw6" Dec 15 06:31:23 crc kubenswrapper[4747]: I1215 06:31:23.798168 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-f4tdk"/"openshift-service-ca.crt" Dec 15 06:31:23 crc kubenswrapper[4747]: I1215 06:31:23.803068 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-f4tdk"/"kube-root-ca.crt" Dec 15 06:31:23 crc kubenswrapper[4747]: I1215 06:31:23.810023 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-f4tdk/must-gather-m4b67"] Dec 15 06:31:23 crc kubenswrapper[4747]: I1215 06:31:23.936019 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lclf6\" (UniqueName: \"kubernetes.io/projected/c1278277-ea16-4ad4-831a-0fd9a3057178-kube-api-access-lclf6\") pod \"must-gather-m4b67\" (UID: 
\"c1278277-ea16-4ad4-831a-0fd9a3057178\") " pod="openshift-must-gather-f4tdk/must-gather-m4b67" Dec 15 06:31:23 crc kubenswrapper[4747]: I1215 06:31:23.936231 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c1278277-ea16-4ad4-831a-0fd9a3057178-must-gather-output\") pod \"must-gather-m4b67\" (UID: \"c1278277-ea16-4ad4-831a-0fd9a3057178\") " pod="openshift-must-gather-f4tdk/must-gather-m4b67" Dec 15 06:31:24 crc kubenswrapper[4747]: I1215 06:31:24.038560 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lclf6\" (UniqueName: \"kubernetes.io/projected/c1278277-ea16-4ad4-831a-0fd9a3057178-kube-api-access-lclf6\") pod \"must-gather-m4b67\" (UID: \"c1278277-ea16-4ad4-831a-0fd9a3057178\") " pod="openshift-must-gather-f4tdk/must-gather-m4b67" Dec 15 06:31:24 crc kubenswrapper[4747]: I1215 06:31:24.038679 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c1278277-ea16-4ad4-831a-0fd9a3057178-must-gather-output\") pod \"must-gather-m4b67\" (UID: \"c1278277-ea16-4ad4-831a-0fd9a3057178\") " pod="openshift-must-gather-f4tdk/must-gather-m4b67" Dec 15 06:31:24 crc kubenswrapper[4747]: I1215 06:31:24.039388 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c1278277-ea16-4ad4-831a-0fd9a3057178-must-gather-output\") pod \"must-gather-m4b67\" (UID: \"c1278277-ea16-4ad4-831a-0fd9a3057178\") " pod="openshift-must-gather-f4tdk/must-gather-m4b67" Dec 15 06:31:24 crc kubenswrapper[4747]: I1215 06:31:24.059138 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lclf6\" (UniqueName: \"kubernetes.io/projected/c1278277-ea16-4ad4-831a-0fd9a3057178-kube-api-access-lclf6\") pod \"must-gather-m4b67\" (UID: 
\"c1278277-ea16-4ad4-831a-0fd9a3057178\") " pod="openshift-must-gather-f4tdk/must-gather-m4b67" Dec 15 06:31:24 crc kubenswrapper[4747]: I1215 06:31:24.112698 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-f4tdk/must-gather-m4b67" Dec 15 06:31:24 crc kubenswrapper[4747]: I1215 06:31:24.365517 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-f4tdk/must-gather-m4b67"] Dec 15 06:31:24 crc kubenswrapper[4747]: W1215 06:31:24.373055 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1278277_ea16_4ad4_831a_0fd9a3057178.slice/crio-b4a62b859a7962fb3aee5c5c7e44dcfc552bb3aec1e8beb1137918eee6bdeb7c WatchSource:0}: Error finding container b4a62b859a7962fb3aee5c5c7e44dcfc552bb3aec1e8beb1137918eee6bdeb7c: Status 404 returned error can't find the container with id b4a62b859a7962fb3aee5c5c7e44dcfc552bb3aec1e8beb1137918eee6bdeb7c Dec 15 06:31:24 crc kubenswrapper[4747]: I1215 06:31:24.441179 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-f4tdk/must-gather-m4b67" event={"ID":"c1278277-ea16-4ad4-831a-0fd9a3057178","Type":"ContainerStarted","Data":"b4a62b859a7962fb3aee5c5c7e44dcfc552bb3aec1e8beb1137918eee6bdeb7c"} Dec 15 06:31:27 crc kubenswrapper[4747]: I1215 06:31:27.629311 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:31:27 crc kubenswrapper[4747]: E1215 06:31:27.631093 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 
06:31:32 crc kubenswrapper[4747]: I1215 06:31:32.517485 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-f4tdk/must-gather-m4b67" event={"ID":"c1278277-ea16-4ad4-831a-0fd9a3057178","Type":"ContainerStarted","Data":"9c5c1885a697597925b2ac8d70704c1923241ffd09412569fcd4245aecc08fcb"} Dec 15 06:31:32 crc kubenswrapper[4747]: I1215 06:31:32.518572 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-f4tdk/must-gather-m4b67" event={"ID":"c1278277-ea16-4ad4-831a-0fd9a3057178","Type":"ContainerStarted","Data":"a1f8d21d74d7c7c83510229abac6651a8b1242f86d19aba11e6b5cbe6cc377f1"} Dec 15 06:31:32 crc kubenswrapper[4747]: I1215 06:31:32.541899 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-f4tdk/must-gather-m4b67" podStartSLOduration=1.8829425739999999 podStartE2EDuration="9.5418789s" podCreationTimestamp="2025-12-15 06:31:23 +0000 UTC" firstStartedPulling="2025-12-15 06:31:24.389071674 +0000 UTC m=+3248.085583590" lastFinishedPulling="2025-12-15 06:31:32.048007999 +0000 UTC m=+3255.744519916" observedRunningTime="2025-12-15 06:31:32.531755317 +0000 UTC m=+3256.228267223" watchObservedRunningTime="2025-12-15 06:31:32.5418789 +0000 UTC m=+3256.238390807" Dec 15 06:31:34 crc kubenswrapper[4747]: E1215 06:31:34.518953 4747 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.25.116:55696->192.168.25.116:34815: write tcp 192.168.25.116:55696->192.168.25.116:34815: write: broken pipe Dec 15 06:31:35 crc kubenswrapper[4747]: I1215 06:31:35.297616 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-f4tdk/crc-debug-28xsv"] Dec 15 06:31:35 crc kubenswrapper[4747]: I1215 06:31:35.299202 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-f4tdk/crc-debug-28xsv" Dec 15 06:31:35 crc kubenswrapper[4747]: I1215 06:31:35.463528 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2bxq\" (UniqueName: \"kubernetes.io/projected/4e1ee3e6-46ac-4d21-a581-8c3ae3d96476-kube-api-access-k2bxq\") pod \"crc-debug-28xsv\" (UID: \"4e1ee3e6-46ac-4d21-a581-8c3ae3d96476\") " pod="openshift-must-gather-f4tdk/crc-debug-28xsv" Dec 15 06:31:35 crc kubenswrapper[4747]: I1215 06:31:35.464059 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e1ee3e6-46ac-4d21-a581-8c3ae3d96476-host\") pod \"crc-debug-28xsv\" (UID: \"4e1ee3e6-46ac-4d21-a581-8c3ae3d96476\") " pod="openshift-must-gather-f4tdk/crc-debug-28xsv" Dec 15 06:31:35 crc kubenswrapper[4747]: I1215 06:31:35.566430 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e1ee3e6-46ac-4d21-a581-8c3ae3d96476-host\") pod \"crc-debug-28xsv\" (UID: \"4e1ee3e6-46ac-4d21-a581-8c3ae3d96476\") " pod="openshift-must-gather-f4tdk/crc-debug-28xsv" Dec 15 06:31:35 crc kubenswrapper[4747]: I1215 06:31:35.566556 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e1ee3e6-46ac-4d21-a581-8c3ae3d96476-host\") pod \"crc-debug-28xsv\" (UID: \"4e1ee3e6-46ac-4d21-a581-8c3ae3d96476\") " pod="openshift-must-gather-f4tdk/crc-debug-28xsv" Dec 15 06:31:35 crc kubenswrapper[4747]: I1215 06:31:35.566674 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2bxq\" (UniqueName: \"kubernetes.io/projected/4e1ee3e6-46ac-4d21-a581-8c3ae3d96476-kube-api-access-k2bxq\") pod \"crc-debug-28xsv\" (UID: \"4e1ee3e6-46ac-4d21-a581-8c3ae3d96476\") " pod="openshift-must-gather-f4tdk/crc-debug-28xsv" Dec 15 06:31:35 crc 
kubenswrapper[4747]: I1215 06:31:35.587701 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2bxq\" (UniqueName: \"kubernetes.io/projected/4e1ee3e6-46ac-4d21-a581-8c3ae3d96476-kube-api-access-k2bxq\") pod \"crc-debug-28xsv\" (UID: \"4e1ee3e6-46ac-4d21-a581-8c3ae3d96476\") " pod="openshift-must-gather-f4tdk/crc-debug-28xsv" Dec 15 06:31:35 crc kubenswrapper[4747]: I1215 06:31:35.614696 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-f4tdk/crc-debug-28xsv" Dec 15 06:31:35 crc kubenswrapper[4747]: W1215 06:31:35.642386 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e1ee3e6_46ac_4d21_a581_8c3ae3d96476.slice/crio-2b8ce531f1f33885d0956a188fe7ef3afe119b2cdce4caad79662ebc866a8428 WatchSource:0}: Error finding container 2b8ce531f1f33885d0956a188fe7ef3afe119b2cdce4caad79662ebc866a8428: Status 404 returned error can't find the container with id 2b8ce531f1f33885d0956a188fe7ef3afe119b2cdce4caad79662ebc866a8428 Dec 15 06:31:36 crc kubenswrapper[4747]: I1215 06:31:36.550780 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-f4tdk/crc-debug-28xsv" event={"ID":"4e1ee3e6-46ac-4d21-a581-8c3ae3d96476","Type":"ContainerStarted","Data":"2b8ce531f1f33885d0956a188fe7ef3afe119b2cdce4caad79662ebc866a8428"} Dec 15 06:31:42 crc kubenswrapper[4747]: I1215 06:31:42.629170 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:31:42 crc kubenswrapper[4747]: E1215 06:31:42.629867 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:31:50 crc kubenswrapper[4747]: I1215 06:31:50.677858 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-f4tdk/crc-debug-28xsv" event={"ID":"4e1ee3e6-46ac-4d21-a581-8c3ae3d96476","Type":"ContainerStarted","Data":"333ad0238c44dd77c37c551997aadbefe0461c2fe77a2fbd890ff09e2490ec7a"} Dec 15 06:31:50 crc kubenswrapper[4747]: I1215 06:31:50.693026 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-f4tdk/crc-debug-28xsv" podStartSLOduration=1.680343606 podStartE2EDuration="15.693006842s" podCreationTimestamp="2025-12-15 06:31:35 +0000 UTC" firstStartedPulling="2025-12-15 06:31:35.644035128 +0000 UTC m=+3259.340547045" lastFinishedPulling="2025-12-15 06:31:49.656698365 +0000 UTC m=+3273.353210281" observedRunningTime="2025-12-15 06:31:50.692661904 +0000 UTC m=+3274.389173822" watchObservedRunningTime="2025-12-15 06:31:50.693006842 +0000 UTC m=+3274.389518759" Dec 15 06:31:57 crc kubenswrapper[4747]: I1215 06:31:57.629712 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:31:57 crc kubenswrapper[4747]: E1215 06:31:57.630482 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:32:07 crc kubenswrapper[4747]: I1215 06:32:07.833493 4747 generic.go:334] "Generic (PLEG): container finished" podID="4e1ee3e6-46ac-4d21-a581-8c3ae3d96476" containerID="333ad0238c44dd77c37c551997aadbefe0461c2fe77a2fbd890ff09e2490ec7a" exitCode=0 Dec 
15 06:32:07 crc kubenswrapper[4747]: I1215 06:32:07.833596 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-f4tdk/crc-debug-28xsv" event={"ID":"4e1ee3e6-46ac-4d21-a581-8c3ae3d96476","Type":"ContainerDied","Data":"333ad0238c44dd77c37c551997aadbefe0461c2fe77a2fbd890ff09e2490ec7a"} Dec 15 06:32:08 crc kubenswrapper[4747]: I1215 06:32:08.630042 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:32:08 crc kubenswrapper[4747]: E1215 06:32:08.630416 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:32:08 crc kubenswrapper[4747]: I1215 06:32:08.952844 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-f4tdk/crc-debug-28xsv" Dec 15 06:32:08 crc kubenswrapper[4747]: I1215 06:32:08.974697 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2bxq\" (UniqueName: \"kubernetes.io/projected/4e1ee3e6-46ac-4d21-a581-8c3ae3d96476-kube-api-access-k2bxq\") pod \"4e1ee3e6-46ac-4d21-a581-8c3ae3d96476\" (UID: \"4e1ee3e6-46ac-4d21-a581-8c3ae3d96476\") " Dec 15 06:32:08 crc kubenswrapper[4747]: I1215 06:32:08.975261 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e1ee3e6-46ac-4d21-a581-8c3ae3d96476-host\") pod \"4e1ee3e6-46ac-4d21-a581-8c3ae3d96476\" (UID: \"4e1ee3e6-46ac-4d21-a581-8c3ae3d96476\") " Dec 15 06:32:08 crc kubenswrapper[4747]: I1215 06:32:08.976776 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e1ee3e6-46ac-4d21-a581-8c3ae3d96476-host" (OuterVolumeSpecName: "host") pod "4e1ee3e6-46ac-4d21-a581-8c3ae3d96476" (UID: "4e1ee3e6-46ac-4d21-a581-8c3ae3d96476"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 06:32:08 crc kubenswrapper[4747]: I1215 06:32:08.983128 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e1ee3e6-46ac-4d21-a581-8c3ae3d96476-kube-api-access-k2bxq" (OuterVolumeSpecName: "kube-api-access-k2bxq") pod "4e1ee3e6-46ac-4d21-a581-8c3ae3d96476" (UID: "4e1ee3e6-46ac-4d21-a581-8c3ae3d96476"). InnerVolumeSpecName "kube-api-access-k2bxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:32:08 crc kubenswrapper[4747]: I1215 06:32:08.999538 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-f4tdk/crc-debug-28xsv"] Dec 15 06:32:09 crc kubenswrapper[4747]: I1215 06:32:09.008015 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-f4tdk/crc-debug-28xsv"] Dec 15 06:32:09 crc kubenswrapper[4747]: I1215 06:32:09.078060 4747 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e1ee3e6-46ac-4d21-a581-8c3ae3d96476-host\") on node \"crc\" DevicePath \"\"" Dec 15 06:32:09 crc kubenswrapper[4747]: I1215 06:32:09.078090 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2bxq\" (UniqueName: \"kubernetes.io/projected/4e1ee3e6-46ac-4d21-a581-8c3ae3d96476-kube-api-access-k2bxq\") on node \"crc\" DevicePath \"\"" Dec 15 06:32:09 crc kubenswrapper[4747]: I1215 06:32:09.852461 4747 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b8ce531f1f33885d0956a188fe7ef3afe119b2cdce4caad79662ebc866a8428" Dec 15 06:32:09 crc kubenswrapper[4747]: I1215 06:32:09.852535 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-f4tdk/crc-debug-28xsv" Dec 15 06:32:10 crc kubenswrapper[4747]: I1215 06:32:10.167343 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-f4tdk/crc-debug-mlxvv"] Dec 15 06:32:10 crc kubenswrapper[4747]: E1215 06:32:10.168029 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1ee3e6-46ac-4d21-a581-8c3ae3d96476" containerName="container-00" Dec 15 06:32:10 crc kubenswrapper[4747]: I1215 06:32:10.168046 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1ee3e6-46ac-4d21-a581-8c3ae3d96476" containerName="container-00" Dec 15 06:32:10 crc kubenswrapper[4747]: I1215 06:32:10.168279 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1ee3e6-46ac-4d21-a581-8c3ae3d96476" containerName="container-00" Dec 15 06:32:10 crc kubenswrapper[4747]: I1215 06:32:10.168909 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-f4tdk/crc-debug-mlxvv" Dec 15 06:32:10 crc kubenswrapper[4747]: I1215 06:32:10.196127 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7b3c4d9d-e4da-4434-98dd-80503186cebf-host\") pod \"crc-debug-mlxvv\" (UID: \"7b3c4d9d-e4da-4434-98dd-80503186cebf\") " pod="openshift-must-gather-f4tdk/crc-debug-mlxvv" Dec 15 06:32:10 crc kubenswrapper[4747]: I1215 06:32:10.196374 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwwnj\" (UniqueName: \"kubernetes.io/projected/7b3c4d9d-e4da-4434-98dd-80503186cebf-kube-api-access-pwwnj\") pod \"crc-debug-mlxvv\" (UID: \"7b3c4d9d-e4da-4434-98dd-80503186cebf\") " pod="openshift-must-gather-f4tdk/crc-debug-mlxvv" Dec 15 06:32:10 crc kubenswrapper[4747]: I1215 06:32:10.298509 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/7b3c4d9d-e4da-4434-98dd-80503186cebf-host\") pod \"crc-debug-mlxvv\" (UID: \"7b3c4d9d-e4da-4434-98dd-80503186cebf\") " pod="openshift-must-gather-f4tdk/crc-debug-mlxvv" Dec 15 06:32:10 crc kubenswrapper[4747]: I1215 06:32:10.298638 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7b3c4d9d-e4da-4434-98dd-80503186cebf-host\") pod \"crc-debug-mlxvv\" (UID: \"7b3c4d9d-e4da-4434-98dd-80503186cebf\") " pod="openshift-must-gather-f4tdk/crc-debug-mlxvv" Dec 15 06:32:10 crc kubenswrapper[4747]: I1215 06:32:10.298682 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwwnj\" (UniqueName: \"kubernetes.io/projected/7b3c4d9d-e4da-4434-98dd-80503186cebf-kube-api-access-pwwnj\") pod \"crc-debug-mlxvv\" (UID: \"7b3c4d9d-e4da-4434-98dd-80503186cebf\") " pod="openshift-must-gather-f4tdk/crc-debug-mlxvv" Dec 15 06:32:10 crc kubenswrapper[4747]: I1215 06:32:10.316713 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwwnj\" (UniqueName: \"kubernetes.io/projected/7b3c4d9d-e4da-4434-98dd-80503186cebf-kube-api-access-pwwnj\") pod \"crc-debug-mlxvv\" (UID: \"7b3c4d9d-e4da-4434-98dd-80503186cebf\") " pod="openshift-must-gather-f4tdk/crc-debug-mlxvv" Dec 15 06:32:10 crc kubenswrapper[4747]: I1215 06:32:10.485171 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-f4tdk/crc-debug-mlxvv" Dec 15 06:32:10 crc kubenswrapper[4747]: W1215 06:32:10.553802 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b3c4d9d_e4da_4434_98dd_80503186cebf.slice/crio-763fceea4eff5d9b9d5aea6a0092b38062cd8ca44a6323b5b28cef71e0aed7b1 WatchSource:0}: Error finding container 763fceea4eff5d9b9d5aea6a0092b38062cd8ca44a6323b5b28cef71e0aed7b1: Status 404 returned error can't find the container with id 763fceea4eff5d9b9d5aea6a0092b38062cd8ca44a6323b5b28cef71e0aed7b1 Dec 15 06:32:10 crc kubenswrapper[4747]: I1215 06:32:10.639415 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e1ee3e6-46ac-4d21-a581-8c3ae3d96476" path="/var/lib/kubelet/pods/4e1ee3e6-46ac-4d21-a581-8c3ae3d96476/volumes" Dec 15 06:32:10 crc kubenswrapper[4747]: I1215 06:32:10.868769 4747 generic.go:334] "Generic (PLEG): container finished" podID="7b3c4d9d-e4da-4434-98dd-80503186cebf" containerID="f7ad480bdd4756078b189226b50fb167050a2d80811c10bfdd42823ce2000b06" exitCode=1 Dec 15 06:32:10 crc kubenswrapper[4747]: I1215 06:32:10.869137 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-f4tdk/crc-debug-mlxvv" event={"ID":"7b3c4d9d-e4da-4434-98dd-80503186cebf","Type":"ContainerDied","Data":"f7ad480bdd4756078b189226b50fb167050a2d80811c10bfdd42823ce2000b06"} Dec 15 06:32:10 crc kubenswrapper[4747]: I1215 06:32:10.869188 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-f4tdk/crc-debug-mlxvv" event={"ID":"7b3c4d9d-e4da-4434-98dd-80503186cebf","Type":"ContainerStarted","Data":"763fceea4eff5d9b9d5aea6a0092b38062cd8ca44a6323b5b28cef71e0aed7b1"} Dec 15 06:32:10 crc kubenswrapper[4747]: I1215 06:32:10.921640 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-f4tdk/crc-debug-mlxvv"] Dec 15 06:32:10 crc kubenswrapper[4747]: I1215 06:32:10.931209 4747 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-f4tdk/crc-debug-mlxvv"] Dec 15 06:32:11 crc kubenswrapper[4747]: I1215 06:32:11.963991 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-f4tdk/crc-debug-mlxvv" Dec 15 06:32:12 crc kubenswrapper[4747]: I1215 06:32:12.138438 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwwnj\" (UniqueName: \"kubernetes.io/projected/7b3c4d9d-e4da-4434-98dd-80503186cebf-kube-api-access-pwwnj\") pod \"7b3c4d9d-e4da-4434-98dd-80503186cebf\" (UID: \"7b3c4d9d-e4da-4434-98dd-80503186cebf\") " Dec 15 06:32:12 crc kubenswrapper[4747]: I1215 06:32:12.139607 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7b3c4d9d-e4da-4434-98dd-80503186cebf-host\") pod \"7b3c4d9d-e4da-4434-98dd-80503186cebf\" (UID: \"7b3c4d9d-e4da-4434-98dd-80503186cebf\") " Dec 15 06:32:12 crc kubenswrapper[4747]: I1215 06:32:12.140321 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b3c4d9d-e4da-4434-98dd-80503186cebf-host" (OuterVolumeSpecName: "host") pod "7b3c4d9d-e4da-4434-98dd-80503186cebf" (UID: "7b3c4d9d-e4da-4434-98dd-80503186cebf"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 06:32:12 crc kubenswrapper[4747]: I1215 06:32:12.141446 4747 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7b3c4d9d-e4da-4434-98dd-80503186cebf-host\") on node \"crc\" DevicePath \"\"" Dec 15 06:32:12 crc kubenswrapper[4747]: I1215 06:32:12.165082 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b3c4d9d-e4da-4434-98dd-80503186cebf-kube-api-access-pwwnj" (OuterVolumeSpecName: "kube-api-access-pwwnj") pod "7b3c4d9d-e4da-4434-98dd-80503186cebf" (UID: "7b3c4d9d-e4da-4434-98dd-80503186cebf"). InnerVolumeSpecName "kube-api-access-pwwnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:32:12 crc kubenswrapper[4747]: I1215 06:32:12.244435 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwwnj\" (UniqueName: \"kubernetes.io/projected/7b3c4d9d-e4da-4434-98dd-80503186cebf-kube-api-access-pwwnj\") on node \"crc\" DevicePath \"\"" Dec 15 06:32:12 crc kubenswrapper[4747]: I1215 06:32:12.639344 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b3c4d9d-e4da-4434-98dd-80503186cebf" path="/var/lib/kubelet/pods/7b3c4d9d-e4da-4434-98dd-80503186cebf/volumes" Dec 15 06:32:12 crc kubenswrapper[4747]: I1215 06:32:12.888474 4747 scope.go:117] "RemoveContainer" containerID="f7ad480bdd4756078b189226b50fb167050a2d80811c10bfdd42823ce2000b06" Dec 15 06:32:12 crc kubenswrapper[4747]: I1215 06:32:12.888518 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-f4tdk/crc-debug-mlxvv" Dec 15 06:32:20 crc kubenswrapper[4747]: I1215 06:32:20.630016 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:32:20 crc kubenswrapper[4747]: E1215 06:32:20.630714 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:32:28 crc kubenswrapper[4747]: I1215 06:32:28.586796 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-79fcd98c9d-ccgjm_ecaeb0c4-ae67-4901-bc77-863b3a8c5c03/barbican-api/0.log" Dec 15 06:32:28 crc kubenswrapper[4747]: I1215 06:32:28.670262 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-79fcd98c9d-ccgjm_ecaeb0c4-ae67-4901-bc77-863b3a8c5c03/barbican-api-log/0.log" Dec 15 06:32:28 crc kubenswrapper[4747]: I1215 06:32:28.749043 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-fb9798db-mhqvv_5adecd4c-fd5a-4186-866f-2de0e4f9a859/barbican-keystone-listener/0.log" Dec 15 06:32:28 crc kubenswrapper[4747]: I1215 06:32:28.799326 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-fb9798db-mhqvv_5adecd4c-fd5a-4186-866f-2de0e4f9a859/barbican-keystone-listener-log/0.log" Dec 15 06:32:28 crc kubenswrapper[4747]: I1215 06:32:28.928464 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-9b996c647-vsbr7_fc262319-2445-42a7-9fb4-46f640216e00/barbican-worker/0.log" Dec 15 06:32:28 crc kubenswrapper[4747]: I1215 06:32:28.938488 4747 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-9b996c647-vsbr7_fc262319-2445-42a7-9fb4-46f640216e00/barbican-worker-log/0.log" Dec 15 06:32:29 crc kubenswrapper[4747]: I1215 06:32:29.035406 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh_2af42599-0cda-45de-b1fe-9bed5ad6f035/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:32:29 crc kubenswrapper[4747]: I1215 06:32:29.146905 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_34eac981-4ed2-4654-b4b0-f52ac5c7aeda/ceilometer-central-agent/0.log" Dec 15 06:32:29 crc kubenswrapper[4747]: I1215 06:32:29.193033 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_34eac981-4ed2-4654-b4b0-f52ac5c7aeda/ceilometer-notification-agent/0.log" Dec 15 06:32:29 crc kubenswrapper[4747]: I1215 06:32:29.228734 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_34eac981-4ed2-4654-b4b0-f52ac5c7aeda/proxy-httpd/0.log" Dec 15 06:32:29 crc kubenswrapper[4747]: I1215 06:32:29.302596 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_34eac981-4ed2-4654-b4b0-f52ac5c7aeda/sg-core/0.log" Dec 15 06:32:29 crc kubenswrapper[4747]: I1215 06:32:29.391571 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f66857a8-55e6-4e4f-ba1c-23bc5afec36b/cinder-api-log/0.log" Dec 15 06:32:29 crc kubenswrapper[4747]: I1215 06:32:29.522481 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f66857a8-55e6-4e4f-ba1c-23bc5afec36b/cinder-api/0.log" Dec 15 06:32:29 crc kubenswrapper[4747]: I1215 06:32:29.558435 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_de848267-fecb-4856-98c8-e81c3cfbb156/cinder-scheduler/0.log" Dec 15 06:32:29 crc kubenswrapper[4747]: I1215 06:32:29.578987 4747 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_de848267-fecb-4856-98c8-e81c3cfbb156/probe/0.log" Dec 15 06:32:29 crc kubenswrapper[4747]: I1215 06:32:29.716227 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz_163accf5-f1cd-48a4-93e3-4c7e6172470e/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:32:29 crc kubenswrapper[4747]: I1215 06:32:29.770756 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-594cx_23f33913-7e72-4eee-bd81-3561906af7fb/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:32:29 crc kubenswrapper[4747]: I1215 06:32:29.891468 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-775bb8f95f-twm2m_ad0490f9-1430-4511-b8cc-139a6c656b48/init/0.log" Dec 15 06:32:30 crc kubenswrapper[4747]: I1215 06:32:30.066185 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-775bb8f95f-twm2m_ad0490f9-1430-4511-b8cc-139a6c656b48/init/0.log" Dec 15 06:32:30 crc kubenswrapper[4747]: I1215 06:32:30.093805 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-775bb8f95f-twm2m_ad0490f9-1430-4511-b8cc-139a6c656b48/dnsmasq-dns/0.log" Dec 15 06:32:30 crc kubenswrapper[4747]: I1215 06:32:30.118168 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh_e1073c0b-63fe-4562-bc3c-953bd3697022/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:32:30 crc kubenswrapper[4747]: I1215 06:32:30.251968 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_32596d05-cc4c-41f3-87b0-a69ff49aba9d/glance-httpd/0.log" Dec 15 06:32:30 crc kubenswrapper[4747]: I1215 06:32:30.273450 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_32596d05-cc4c-41f3-87b0-a69ff49aba9d/glance-log/0.log" Dec 15 06:32:30 crc kubenswrapper[4747]: I1215 06:32:30.423997 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_89218c2b-2e98-43cc-a4b4-3e741773bfb8/glance-log/0.log" Dec 15 06:32:30 crc kubenswrapper[4747]: I1215 06:32:30.447355 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_89218c2b-2e98-43cc-a4b4-3e741773bfb8/glance-httpd/0.log" Dec 15 06:32:30 crc kubenswrapper[4747]: I1215 06:32:30.572183 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-tftvb_4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:32:30 crc kubenswrapper[4747]: I1215 06:32:30.712129 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-kp4bl_589b27c2-c1d7-423e-b324-10ebc183f51d/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:32:30 crc kubenswrapper[4747]: I1215 06:32:30.948421 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29429641-58jbc_e52cba4b-1373-4ebc-8e01-f0cb86d099ea/keystone-cron/0.log" Dec 15 06:32:31 crc kubenswrapper[4747]: I1215 06:32:31.106146 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6b5fccc9fc-25v6s_3f0cf723-d247-4d37-95f2-2ba1318f3e27/keystone-api/0.log" Dec 15 06:32:31 crc kubenswrapper[4747]: I1215 06:32:31.121711 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_39eb3298-a864-45a5-b1a1-df263390967d/kube-state-metrics/0.log" Dec 15 06:32:31 crc kubenswrapper[4747]: I1215 06:32:31.308344 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk_56848ff3-1ce9-42b3-be44-5b8d4280c9a1/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:32:31 crc kubenswrapper[4747]: I1215 06:32:31.470703 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_8828a0c4-9d91-45ba-a6f7-3bd720a9596b/memcached/0.log" Dec 15 06:32:31 crc kubenswrapper[4747]: I1215 06:32:31.667772 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b9f9565dc-vlcmk_8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7/neutron-api/0.log" Dec 15 06:32:31 crc kubenswrapper[4747]: I1215 06:32:31.669742 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b9f9565dc-vlcmk_8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7/neutron-httpd/0.log" Dec 15 06:32:31 crc kubenswrapper[4747]: I1215 06:32:31.681608 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b_cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:32:32 crc kubenswrapper[4747]: I1215 06:32:32.075883 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ead9df5f-294c-464e-b416-743ad9245464/nova-cell0-conductor-conductor/0.log" Dec 15 06:32:32 crc kubenswrapper[4747]: I1215 06:32:32.162222 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7d859c86-54f1-459b-82a5-1ed6739f42f9/nova-api-log/0.log" Dec 15 06:32:32 crc kubenswrapper[4747]: I1215 06:32:32.185472 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8472a77c-3c9b-4fa1-9572-cc21f9c2b814/nova-cell1-conductor-conductor/0.log" Dec 15 06:32:32 crc kubenswrapper[4747]: I1215 06:32:32.305438 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7d859c86-54f1-459b-82a5-1ed6739f42f9/nova-api-api/0.log" Dec 15 06:32:32 
crc kubenswrapper[4747]: I1215 06:32:32.327671 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_51032daa-0c9c-4794-9422-2ea37212e21e/nova-cell1-novncproxy-novncproxy/0.log" Dec 15 06:32:32 crc kubenswrapper[4747]: I1215 06:32:32.421436 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-v7cvg_6a04d0c3-49fa-44ad-ab27-08ba583d1142/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:32:32 crc kubenswrapper[4747]: I1215 06:32:32.574972 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e45a17a8-29f1-40e2-96ae-f2db0b32407e/nova-metadata-log/0.log" Dec 15 06:32:32 crc kubenswrapper[4747]: I1215 06:32:32.704711 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_13aa9deb-71b7-4adf-858c-89c461427547/nova-scheduler-scheduler/0.log" Dec 15 06:32:32 crc kubenswrapper[4747]: I1215 06:32:32.775852 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_22da0dca-a59a-40f7-8dd2-95305eea5ee0/mysql-bootstrap/0.log" Dec 15 06:32:33 crc kubenswrapper[4747]: I1215 06:32:33.021797 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f242c5ef-84fc-4437-86a0-0175e8ea123b/mysql-bootstrap/0.log" Dec 15 06:32:33 crc kubenswrapper[4747]: I1215 06:32:33.026602 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_22da0dca-a59a-40f7-8dd2-95305eea5ee0/mysql-bootstrap/0.log" Dec 15 06:32:33 crc kubenswrapper[4747]: I1215 06:32:33.068705 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_22da0dca-a59a-40f7-8dd2-95305eea5ee0/galera/0.log" Dec 15 06:32:33 crc kubenswrapper[4747]: I1215 06:32:33.261805 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_f242c5ef-84fc-4437-86a0-0175e8ea123b/mysql-bootstrap/0.log" Dec 15 06:32:33 crc kubenswrapper[4747]: I1215 06:32:33.263322 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e45a17a8-29f1-40e2-96ae-f2db0b32407e/nova-metadata-metadata/0.log" Dec 15 06:32:33 crc kubenswrapper[4747]: I1215 06:32:33.324724 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f242c5ef-84fc-4437-86a0-0175e8ea123b/galera/0.log" Dec 15 06:32:33 crc kubenswrapper[4747]: I1215 06:32:33.335241 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_03ee9ab5-c184-4473-ba41-5609f6aa29df/openstackclient/0.log" Dec 15 06:32:33 crc kubenswrapper[4747]: I1215 06:32:33.455655 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-b65n4_becaa3b6-8cd5-4e55-9a81-0a21fec0a70b/ovn-controller/0.log" Dec 15 06:32:33 crc kubenswrapper[4747]: I1215 06:32:33.494027 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-pl88m_de8a2190-14e4-44fa-a3a7-18182a6b4df6/openstack-network-exporter/0.log" Dec 15 06:32:33 crc kubenswrapper[4747]: I1215 06:32:33.630250 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:32:33 crc kubenswrapper[4747]: E1215 06:32:33.630523 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:32:33 crc kubenswrapper[4747]: I1215 06:32:33.652024 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-jmz8h_dca41dd5-5747-42a1-8703-30ae549342b7/ovsdb-server-init/0.log" Dec 15 06:32:33 crc kubenswrapper[4747]: I1215 06:32:33.839305 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jmz8h_dca41dd5-5747-42a1-8703-30ae549342b7/ovs-vswitchd/0.log" Dec 15 06:32:33 crc kubenswrapper[4747]: I1215 06:32:33.839657 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jmz8h_dca41dd5-5747-42a1-8703-30ae549342b7/ovsdb-server-init/0.log" Dec 15 06:32:33 crc kubenswrapper[4747]: I1215 06:32:33.846804 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jmz8h_dca41dd5-5747-42a1-8703-30ae549342b7/ovsdb-server/0.log" Dec 15 06:32:33 crc kubenswrapper[4747]: I1215 06:32:33.918595 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-cppqm_ae0fffb8-5fa6-4351-83ac-e2687b00d983/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:32:34 crc kubenswrapper[4747]: I1215 06:32:34.001964 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c9efeed7-bf14-463d-829f-b3e95d8323b2/openstack-network-exporter/0.log" Dec 15 06:32:34 crc kubenswrapper[4747]: I1215 06:32:34.022803 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c9efeed7-bf14-463d-829f-b3e95d8323b2/ovn-northd/0.log" Dec 15 06:32:34 crc kubenswrapper[4747]: I1215 06:32:34.253971 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d84a0b88-fbfb-4d28-89e0-5a64b4a1430f/openstack-network-exporter/0.log" Dec 15 06:32:34 crc kubenswrapper[4747]: I1215 06:32:34.305860 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d84a0b88-fbfb-4d28-89e0-5a64b4a1430f/ovsdbserver-nb/0.log" Dec 15 06:32:34 crc kubenswrapper[4747]: I1215 06:32:34.358428 4747 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_633ee263-eae2-4211-ae9e-d0efd7f7ac2f/openstack-network-exporter/0.log" Dec 15 06:32:34 crc kubenswrapper[4747]: I1215 06:32:34.433753 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_633ee263-eae2-4211-ae9e-d0efd7f7ac2f/ovsdbserver-sb/0.log" Dec 15 06:32:34 crc kubenswrapper[4747]: I1215 06:32:34.503177 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-d77548fc6-2zqkd_18ea26dc-78f1-479e-9e7c-722632f9304d/placement-api/0.log" Dec 15 06:32:34 crc kubenswrapper[4747]: I1215 06:32:34.582838 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-d77548fc6-2zqkd_18ea26dc-78f1-479e-9e7c-722632f9304d/placement-log/0.log" Dec 15 06:32:34 crc kubenswrapper[4747]: I1215 06:32:34.662721 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_42363452-e04c-462e-8341-6f3f99392357/setup-container/0.log" Dec 15 06:32:34 crc kubenswrapper[4747]: I1215 06:32:34.829805 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7a527363-fdfb-4bbe-a50e-41923c5cc78c/setup-container/0.log" Dec 15 06:32:34 crc kubenswrapper[4747]: I1215 06:32:34.854417 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_42363452-e04c-462e-8341-6f3f99392357/setup-container/0.log" Dec 15 06:32:34 crc kubenswrapper[4747]: I1215 06:32:34.881155 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_42363452-e04c-462e-8341-6f3f99392357/rabbitmq/0.log" Dec 15 06:32:34 crc kubenswrapper[4747]: I1215 06:32:34.999610 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7a527363-fdfb-4bbe-a50e-41923c5cc78c/setup-container/0.log" Dec 15 06:32:35 crc kubenswrapper[4747]: I1215 06:32:35.002212 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_7a527363-fdfb-4bbe-a50e-41923c5cc78c/rabbitmq/0.log" Dec 15 06:32:35 crc kubenswrapper[4747]: I1215 06:32:35.028146 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9_d87302aa-4741-47b7-8126-aaeeb74ace60/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:32:35 crc kubenswrapper[4747]: I1215 06:32:35.163495 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-nngr7_9a1bff2c-a33c-4816-998e-243617f6e473/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:32:35 crc kubenswrapper[4747]: I1215 06:32:35.194325 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-kps28_d11ad7a8-e6c0-497a-8a1a-0b82be444a86/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:32:35 crc kubenswrapper[4747]: I1215 06:32:35.264375 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-jbxq5_3be348eb-7098-4347-b98e-dcf987dd854e/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:32:35 crc kubenswrapper[4747]: I1215 06:32:35.433230 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-cvt9r_adb1376f-7db9-4946-8843-44313c04df54/ssh-known-hosts-edpm-deployment/0.log" Dec 15 06:32:35 crc kubenswrapper[4747]: I1215 06:32:35.538634 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-84688cc58c-2mrlh_01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8/proxy-httpd/0.log" Dec 15 06:32:35 crc kubenswrapper[4747]: I1215 06:32:35.571864 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-84688cc58c-2mrlh_01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8/proxy-server/0.log" Dec 15 06:32:35 crc kubenswrapper[4747]: I1215 06:32:35.635736 4747 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_swift-ring-rebalance-f2kl2_6a5299e8-666f-431f-9ecc-5dcc74352e38/swift-ring-rebalance/0.log" Dec 15 06:32:35 crc kubenswrapper[4747]: I1215 06:32:35.699665 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/account-auditor/0.log" Dec 15 06:32:35 crc kubenswrapper[4747]: I1215 06:32:35.728575 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/account-reaper/0.log" Dec 15 06:32:35 crc kubenswrapper[4747]: I1215 06:32:35.791480 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/account-replicator/0.log" Dec 15 06:32:35 crc kubenswrapper[4747]: I1215 06:32:35.816038 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/account-server/0.log" Dec 15 06:32:35 crc kubenswrapper[4747]: I1215 06:32:35.842835 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/container-auditor/0.log" Dec 15 06:32:35 crc kubenswrapper[4747]: I1215 06:32:35.904746 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/container-server/0.log" Dec 15 06:32:35 crc kubenswrapper[4747]: I1215 06:32:35.910734 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/container-replicator/0.log" Dec 15 06:32:35 crc kubenswrapper[4747]: I1215 06:32:35.953506 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/container-updater/0.log" Dec 15 06:32:36 crc kubenswrapper[4747]: I1215 06:32:36.003129 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/object-auditor/0.log" Dec 15 06:32:36 crc kubenswrapper[4747]: I1215 06:32:36.024032 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/object-expirer/0.log" Dec 15 06:32:36 crc kubenswrapper[4747]: I1215 06:32:36.105667 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/object-server/0.log" Dec 15 06:32:36 crc kubenswrapper[4747]: I1215 06:32:36.111307 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/object-replicator/0.log" Dec 15 06:32:36 crc kubenswrapper[4747]: I1215 06:32:36.161582 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/object-updater/0.log" Dec 15 06:32:36 crc kubenswrapper[4747]: I1215 06:32:36.174669 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/rsync/0.log" Dec 15 06:32:36 crc kubenswrapper[4747]: I1215 06:32:36.230504 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/swift-recon-cron/0.log" Dec 15 06:32:36 crc kubenswrapper[4747]: I1215 06:32:36.316997 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-f674t_a7d200be-a60e-4759-8772-1845c1ab0534/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:32:36 crc kubenswrapper[4747]: I1215 06:32:36.416741 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_0feaf663-b187-479f-8129-5aa6bf3b9047/tempest-tests-tempest-tests-runner/0.log" Dec 15 06:32:36 crc kubenswrapper[4747]: I1215 06:32:36.516411 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b3fb43ea-8919-4b3f-bfd7-27ee6d7e8a0b/test-operator-logs-container/0.log" Dec 15 06:32:36 crc kubenswrapper[4747]: I1215 06:32:36.625903 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr_89aec499-875b-4b3b-8486-b01d8713b1c6/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:32:41 crc kubenswrapper[4747]: I1215 06:32:41.439346 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2tbd7"] Dec 15 06:32:41 crc kubenswrapper[4747]: E1215 06:32:41.439938 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3c4d9d-e4da-4434-98dd-80503186cebf" containerName="container-00" Dec 15 06:32:41 crc kubenswrapper[4747]: I1215 06:32:41.439952 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3c4d9d-e4da-4434-98dd-80503186cebf" containerName="container-00" Dec 15 06:32:41 crc kubenswrapper[4747]: I1215 06:32:41.440133 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b3c4d9d-e4da-4434-98dd-80503186cebf" containerName="container-00" Dec 15 06:32:41 crc kubenswrapper[4747]: I1215 06:32:41.441279 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2tbd7" Dec 15 06:32:41 crc kubenswrapper[4747]: I1215 06:32:41.455466 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2tbd7"] Dec 15 06:32:41 crc kubenswrapper[4747]: I1215 06:32:41.628623 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k448p\" (UniqueName: \"kubernetes.io/projected/6f8ff5e0-398a-43ca-a292-6da4ef44b19c-kube-api-access-k448p\") pod \"redhat-operators-2tbd7\" (UID: \"6f8ff5e0-398a-43ca-a292-6da4ef44b19c\") " pod="openshift-marketplace/redhat-operators-2tbd7" Dec 15 06:32:41 crc kubenswrapper[4747]: I1215 06:32:41.628714 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f8ff5e0-398a-43ca-a292-6da4ef44b19c-utilities\") pod \"redhat-operators-2tbd7\" (UID: \"6f8ff5e0-398a-43ca-a292-6da4ef44b19c\") " pod="openshift-marketplace/redhat-operators-2tbd7" Dec 15 06:32:41 crc kubenswrapper[4747]: I1215 06:32:41.628776 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f8ff5e0-398a-43ca-a292-6da4ef44b19c-catalog-content\") pod \"redhat-operators-2tbd7\" (UID: \"6f8ff5e0-398a-43ca-a292-6da4ef44b19c\") " pod="openshift-marketplace/redhat-operators-2tbd7" Dec 15 06:32:41 crc kubenswrapper[4747]: I1215 06:32:41.731061 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f8ff5e0-398a-43ca-a292-6da4ef44b19c-utilities\") pod \"redhat-operators-2tbd7\" (UID: \"6f8ff5e0-398a-43ca-a292-6da4ef44b19c\") " pod="openshift-marketplace/redhat-operators-2tbd7" Dec 15 06:32:41 crc kubenswrapper[4747]: I1215 06:32:41.731160 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f8ff5e0-398a-43ca-a292-6da4ef44b19c-catalog-content\") pod \"redhat-operators-2tbd7\" (UID: \"6f8ff5e0-398a-43ca-a292-6da4ef44b19c\") " pod="openshift-marketplace/redhat-operators-2tbd7" Dec 15 06:32:41 crc kubenswrapper[4747]: I1215 06:32:41.731302 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k448p\" (UniqueName: \"kubernetes.io/projected/6f8ff5e0-398a-43ca-a292-6da4ef44b19c-kube-api-access-k448p\") pod \"redhat-operators-2tbd7\" (UID: \"6f8ff5e0-398a-43ca-a292-6da4ef44b19c\") " pod="openshift-marketplace/redhat-operators-2tbd7" Dec 15 06:32:41 crc kubenswrapper[4747]: I1215 06:32:41.731597 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f8ff5e0-398a-43ca-a292-6da4ef44b19c-utilities\") pod \"redhat-operators-2tbd7\" (UID: \"6f8ff5e0-398a-43ca-a292-6da4ef44b19c\") " pod="openshift-marketplace/redhat-operators-2tbd7" Dec 15 06:32:41 crc kubenswrapper[4747]: I1215 06:32:41.731702 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f8ff5e0-398a-43ca-a292-6da4ef44b19c-catalog-content\") pod \"redhat-operators-2tbd7\" (UID: \"6f8ff5e0-398a-43ca-a292-6da4ef44b19c\") " pod="openshift-marketplace/redhat-operators-2tbd7" Dec 15 06:32:41 crc kubenswrapper[4747]: I1215 06:32:41.764801 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k448p\" (UniqueName: \"kubernetes.io/projected/6f8ff5e0-398a-43ca-a292-6da4ef44b19c-kube-api-access-k448p\") pod \"redhat-operators-2tbd7\" (UID: \"6f8ff5e0-398a-43ca-a292-6da4ef44b19c\") " pod="openshift-marketplace/redhat-operators-2tbd7" Dec 15 06:32:41 crc kubenswrapper[4747]: I1215 06:32:41.768212 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2tbd7" Dec 15 06:32:42 crc kubenswrapper[4747]: I1215 06:32:42.253010 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2tbd7"] Dec 15 06:32:43 crc kubenswrapper[4747]: I1215 06:32:43.135860 4747 generic.go:334] "Generic (PLEG): container finished" podID="6f8ff5e0-398a-43ca-a292-6da4ef44b19c" containerID="d3b1dcebd011db8dde4033d8b8353f55f36944fa6d0ebaa2bd7a888a6b547c35" exitCode=0 Dec 15 06:32:43 crc kubenswrapper[4747]: I1215 06:32:43.135979 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tbd7" event={"ID":"6f8ff5e0-398a-43ca-a292-6da4ef44b19c","Type":"ContainerDied","Data":"d3b1dcebd011db8dde4033d8b8353f55f36944fa6d0ebaa2bd7a888a6b547c35"} Dec 15 06:32:43 crc kubenswrapper[4747]: I1215 06:32:43.136444 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tbd7" event={"ID":"6f8ff5e0-398a-43ca-a292-6da4ef44b19c","Type":"ContainerStarted","Data":"23f2792e3231626879406e839efc9c648ac7f4ae1cce6d8440576334fb3959bd"} Dec 15 06:32:44 crc kubenswrapper[4747]: I1215 06:32:44.144447 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tbd7" event={"ID":"6f8ff5e0-398a-43ca-a292-6da4ef44b19c","Type":"ContainerStarted","Data":"3c5b26cce81884571fb2c658eb716afe192a6c269417b31ce652951dc2e2a15a"} Dec 15 06:32:46 crc kubenswrapper[4747]: I1215 06:32:46.162604 4747 generic.go:334] "Generic (PLEG): container finished" podID="6f8ff5e0-398a-43ca-a292-6da4ef44b19c" containerID="3c5b26cce81884571fb2c658eb716afe192a6c269417b31ce652951dc2e2a15a" exitCode=0 Dec 15 06:32:46 crc kubenswrapper[4747]: I1215 06:32:46.162698 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tbd7" 
event={"ID":"6f8ff5e0-398a-43ca-a292-6da4ef44b19c","Type":"ContainerDied","Data":"3c5b26cce81884571fb2c658eb716afe192a6c269417b31ce652951dc2e2a15a"} Dec 15 06:32:47 crc kubenswrapper[4747]: I1215 06:32:47.174115 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tbd7" event={"ID":"6f8ff5e0-398a-43ca-a292-6da4ef44b19c","Type":"ContainerStarted","Data":"51881b3e17ddc222ce0d899cded01be0e42ef03a4b02e31f7c0829258bcb7296"} Dec 15 06:32:47 crc kubenswrapper[4747]: I1215 06:32:47.187714 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2tbd7" podStartSLOduration=2.557485093 podStartE2EDuration="6.18769905s" podCreationTimestamp="2025-12-15 06:32:41 +0000 UTC" firstStartedPulling="2025-12-15 06:32:43.138204515 +0000 UTC m=+3326.834716433" lastFinishedPulling="2025-12-15 06:32:46.768418472 +0000 UTC m=+3330.464930390" observedRunningTime="2025-12-15 06:32:47.186198629 +0000 UTC m=+3330.882710546" watchObservedRunningTime="2025-12-15 06:32:47.18769905 +0000 UTC m=+3330.884210967" Dec 15 06:32:48 crc kubenswrapper[4747]: I1215 06:32:48.630375 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:32:48 crc kubenswrapper[4747]: E1215 06:32:48.631400 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:32:51 crc kubenswrapper[4747]: I1215 06:32:51.768461 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2tbd7" Dec 15 06:32:51 crc kubenswrapper[4747]: 
I1215 06:32:51.769511 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2tbd7" Dec 15 06:32:51 crc kubenswrapper[4747]: I1215 06:32:51.814407 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2tbd7" Dec 15 06:32:52 crc kubenswrapper[4747]: I1215 06:32:52.256853 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2tbd7" Dec 15 06:32:52 crc kubenswrapper[4747]: I1215 06:32:52.296644 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2tbd7"] Dec 15 06:32:54 crc kubenswrapper[4747]: I1215 06:32:54.235751 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2tbd7" podUID="6f8ff5e0-398a-43ca-a292-6da4ef44b19c" containerName="registry-server" containerID="cri-o://51881b3e17ddc222ce0d899cded01be0e42ef03a4b02e31f7c0829258bcb7296" gracePeriod=2 Dec 15 06:32:54 crc kubenswrapper[4747]: I1215 06:32:54.693093 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2tbd7" Dec 15 06:32:54 crc kubenswrapper[4747]: I1215 06:32:54.819773 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f8ff5e0-398a-43ca-a292-6da4ef44b19c-utilities\") pod \"6f8ff5e0-398a-43ca-a292-6da4ef44b19c\" (UID: \"6f8ff5e0-398a-43ca-a292-6da4ef44b19c\") " Dec 15 06:32:54 crc kubenswrapper[4747]: I1215 06:32:54.819869 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f8ff5e0-398a-43ca-a292-6da4ef44b19c-catalog-content\") pod \"6f8ff5e0-398a-43ca-a292-6da4ef44b19c\" (UID: \"6f8ff5e0-398a-43ca-a292-6da4ef44b19c\") " Dec 15 06:32:54 crc kubenswrapper[4747]: I1215 06:32:54.819972 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k448p\" (UniqueName: \"kubernetes.io/projected/6f8ff5e0-398a-43ca-a292-6da4ef44b19c-kube-api-access-k448p\") pod \"6f8ff5e0-398a-43ca-a292-6da4ef44b19c\" (UID: \"6f8ff5e0-398a-43ca-a292-6da4ef44b19c\") " Dec 15 06:32:54 crc kubenswrapper[4747]: I1215 06:32:54.820436 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f8ff5e0-398a-43ca-a292-6da4ef44b19c-utilities" (OuterVolumeSpecName: "utilities") pod "6f8ff5e0-398a-43ca-a292-6da4ef44b19c" (UID: "6f8ff5e0-398a-43ca-a292-6da4ef44b19c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:32:54 crc kubenswrapper[4747]: I1215 06:32:54.820845 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f8ff5e0-398a-43ca-a292-6da4ef44b19c-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 06:32:54 crc kubenswrapper[4747]: I1215 06:32:54.832368 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f8ff5e0-398a-43ca-a292-6da4ef44b19c-kube-api-access-k448p" (OuterVolumeSpecName: "kube-api-access-k448p") pod "6f8ff5e0-398a-43ca-a292-6da4ef44b19c" (UID: "6f8ff5e0-398a-43ca-a292-6da4ef44b19c"). InnerVolumeSpecName "kube-api-access-k448p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:32:54 crc kubenswrapper[4747]: I1215 06:32:54.914241 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f8ff5e0-398a-43ca-a292-6da4ef44b19c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f8ff5e0-398a-43ca-a292-6da4ef44b19c" (UID: "6f8ff5e0-398a-43ca-a292-6da4ef44b19c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:32:54 crc kubenswrapper[4747]: I1215 06:32:54.923356 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f8ff5e0-398a-43ca-a292-6da4ef44b19c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 06:32:54 crc kubenswrapper[4747]: I1215 06:32:54.923387 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k448p\" (UniqueName: \"kubernetes.io/projected/6f8ff5e0-398a-43ca-a292-6da4ef44b19c-kube-api-access-k448p\") on node \"crc\" DevicePath \"\"" Dec 15 06:32:55 crc kubenswrapper[4747]: I1215 06:32:55.245612 4747 generic.go:334] "Generic (PLEG): container finished" podID="6f8ff5e0-398a-43ca-a292-6da4ef44b19c" containerID="51881b3e17ddc222ce0d899cded01be0e42ef03a4b02e31f7c0829258bcb7296" exitCode=0 Dec 15 06:32:55 crc kubenswrapper[4747]: I1215 06:32:55.245697 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2tbd7" Dec 15 06:32:55 crc kubenswrapper[4747]: I1215 06:32:55.245720 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tbd7" event={"ID":"6f8ff5e0-398a-43ca-a292-6da4ef44b19c","Type":"ContainerDied","Data":"51881b3e17ddc222ce0d899cded01be0e42ef03a4b02e31f7c0829258bcb7296"} Dec 15 06:32:55 crc kubenswrapper[4747]: I1215 06:32:55.248993 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tbd7" event={"ID":"6f8ff5e0-398a-43ca-a292-6da4ef44b19c","Type":"ContainerDied","Data":"23f2792e3231626879406e839efc9c648ac7f4ae1cce6d8440576334fb3959bd"} Dec 15 06:32:55 crc kubenswrapper[4747]: I1215 06:32:55.249016 4747 scope.go:117] "RemoveContainer" containerID="51881b3e17ddc222ce0d899cded01be0e42ef03a4b02e31f7c0829258bcb7296" Dec 15 06:32:55 crc kubenswrapper[4747]: I1215 06:32:55.266973 4747 scope.go:117] "RemoveContainer" 
containerID="3c5b26cce81884571fb2c658eb716afe192a6c269417b31ce652951dc2e2a15a" Dec 15 06:32:55 crc kubenswrapper[4747]: I1215 06:32:55.277869 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2tbd7"] Dec 15 06:32:55 crc kubenswrapper[4747]: I1215 06:32:55.284267 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2tbd7"] Dec 15 06:32:55 crc kubenswrapper[4747]: I1215 06:32:55.307038 4747 scope.go:117] "RemoveContainer" containerID="d3b1dcebd011db8dde4033d8b8353f55f36944fa6d0ebaa2bd7a888a6b547c35" Dec 15 06:32:55 crc kubenswrapper[4747]: I1215 06:32:55.346267 4747 scope.go:117] "RemoveContainer" containerID="51881b3e17ddc222ce0d899cded01be0e42ef03a4b02e31f7c0829258bcb7296" Dec 15 06:32:55 crc kubenswrapper[4747]: E1215 06:32:55.346605 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51881b3e17ddc222ce0d899cded01be0e42ef03a4b02e31f7c0829258bcb7296\": container with ID starting with 51881b3e17ddc222ce0d899cded01be0e42ef03a4b02e31f7c0829258bcb7296 not found: ID does not exist" containerID="51881b3e17ddc222ce0d899cded01be0e42ef03a4b02e31f7c0829258bcb7296" Dec 15 06:32:55 crc kubenswrapper[4747]: I1215 06:32:55.346642 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51881b3e17ddc222ce0d899cded01be0e42ef03a4b02e31f7c0829258bcb7296"} err="failed to get container status \"51881b3e17ddc222ce0d899cded01be0e42ef03a4b02e31f7c0829258bcb7296\": rpc error: code = NotFound desc = could not find container \"51881b3e17ddc222ce0d899cded01be0e42ef03a4b02e31f7c0829258bcb7296\": container with ID starting with 51881b3e17ddc222ce0d899cded01be0e42ef03a4b02e31f7c0829258bcb7296 not found: ID does not exist" Dec 15 06:32:55 crc kubenswrapper[4747]: I1215 06:32:55.346668 4747 scope.go:117] "RemoveContainer" 
containerID="3c5b26cce81884571fb2c658eb716afe192a6c269417b31ce652951dc2e2a15a" Dec 15 06:32:55 crc kubenswrapper[4747]: E1215 06:32:55.347157 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c5b26cce81884571fb2c658eb716afe192a6c269417b31ce652951dc2e2a15a\": container with ID starting with 3c5b26cce81884571fb2c658eb716afe192a6c269417b31ce652951dc2e2a15a not found: ID does not exist" containerID="3c5b26cce81884571fb2c658eb716afe192a6c269417b31ce652951dc2e2a15a" Dec 15 06:32:55 crc kubenswrapper[4747]: I1215 06:32:55.347194 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c5b26cce81884571fb2c658eb716afe192a6c269417b31ce652951dc2e2a15a"} err="failed to get container status \"3c5b26cce81884571fb2c658eb716afe192a6c269417b31ce652951dc2e2a15a\": rpc error: code = NotFound desc = could not find container \"3c5b26cce81884571fb2c658eb716afe192a6c269417b31ce652951dc2e2a15a\": container with ID starting with 3c5b26cce81884571fb2c658eb716afe192a6c269417b31ce652951dc2e2a15a not found: ID does not exist" Dec 15 06:32:55 crc kubenswrapper[4747]: I1215 06:32:55.347220 4747 scope.go:117] "RemoveContainer" containerID="d3b1dcebd011db8dde4033d8b8353f55f36944fa6d0ebaa2bd7a888a6b547c35" Dec 15 06:32:55 crc kubenswrapper[4747]: E1215 06:32:55.347493 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3b1dcebd011db8dde4033d8b8353f55f36944fa6d0ebaa2bd7a888a6b547c35\": container with ID starting with d3b1dcebd011db8dde4033d8b8353f55f36944fa6d0ebaa2bd7a888a6b547c35 not found: ID does not exist" containerID="d3b1dcebd011db8dde4033d8b8353f55f36944fa6d0ebaa2bd7a888a6b547c35" Dec 15 06:32:55 crc kubenswrapper[4747]: I1215 06:32:55.347517 4747 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d3b1dcebd011db8dde4033d8b8353f55f36944fa6d0ebaa2bd7a888a6b547c35"} err="failed to get container status \"d3b1dcebd011db8dde4033d8b8353f55f36944fa6d0ebaa2bd7a888a6b547c35\": rpc error: code = NotFound desc = could not find container \"d3b1dcebd011db8dde4033d8b8353f55f36944fa6d0ebaa2bd7a888a6b547c35\": container with ID starting with d3b1dcebd011db8dde4033d8b8353f55f36944fa6d0ebaa2bd7a888a6b547c35 not found: ID does not exist" Dec 15 06:32:55 crc kubenswrapper[4747]: I1215 06:32:55.783841 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-95949466-pzsnr_966a3797-97c2-4e8d-8799-6b8a287efd78/manager/0.log" Dec 15 06:32:55 crc kubenswrapper[4747]: I1215 06:32:55.907871 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5cf45c46bd-ggkl6_50d161a9-2162-4642-bfd4-74bde1129134/manager/0.log" Dec 15 06:32:55 crc kubenswrapper[4747]: I1215 06:32:55.932526 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-rkqrw_a9d4c90d-ecd6-4126-8d91-dfb784a64d54/manager/0.log" Dec 15 06:32:56 crc kubenswrapper[4747]: I1215 06:32:56.076363 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl_2c345b7d-bd2d-43c7-9f3f-906a003a24e5/util/0.log" Dec 15 06:32:56 crc kubenswrapper[4747]: I1215 06:32:56.194029 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl_2c345b7d-bd2d-43c7-9f3f-906a003a24e5/pull/0.log" Dec 15 06:32:56 crc kubenswrapper[4747]: I1215 06:32:56.208912 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl_2c345b7d-bd2d-43c7-9f3f-906a003a24e5/util/0.log" 
Dec 15 06:32:56 crc kubenswrapper[4747]: I1215 06:32:56.210657 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl_2c345b7d-bd2d-43c7-9f3f-906a003a24e5/pull/0.log" Dec 15 06:32:56 crc kubenswrapper[4747]: I1215 06:32:56.380659 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl_2c345b7d-bd2d-43c7-9f3f-906a003a24e5/util/0.log" Dec 15 06:32:56 crc kubenswrapper[4747]: I1215 06:32:56.408492 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl_2c345b7d-bd2d-43c7-9f3f-906a003a24e5/pull/0.log" Dec 15 06:32:56 crc kubenswrapper[4747]: I1215 06:32:56.414645 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl_2c345b7d-bd2d-43c7-9f3f-906a003a24e5/extract/0.log" Dec 15 06:32:56 crc kubenswrapper[4747]: I1215 06:32:56.572723 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-59b8dcb766-tcs4c_07926291-631c-415d-8aaa-c425852decd9/manager/0.log" Dec 15 06:32:56 crc kubenswrapper[4747]: I1215 06:32:56.589212 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-767f9d7567-hk9c4_ed7a99f7-83b8-48f4-9cc9-135af2e16529/manager/0.log" Dec 15 06:32:56 crc kubenswrapper[4747]: I1215 06:32:56.637807 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f8ff5e0-398a-43ca-a292-6da4ef44b19c" path="/var/lib/kubelet/pods/6f8ff5e0-398a-43ca-a292-6da4ef44b19c/volumes" Dec 15 06:32:56 crc kubenswrapper[4747]: I1215 06:32:56.732900 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6ccf486b9-cmgcn_c8a35ff2-385b-46d4-95e6-d7e85a7c8477/manager/0.log" Dec 15 06:32:57 crc kubenswrapper[4747]: I1215 06:32:57.009312 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f458558d7-vf58x_b93e01ce-98e3-4941-8721-d9ce67414730/manager/0.log" Dec 15 06:32:57 crc kubenswrapper[4747]: I1215 06:32:57.040767 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58944d7758-s79wq_bb8f1731-54b2-4d71-96fb-13fde067045b/manager/0.log" Dec 15 06:32:57 crc kubenswrapper[4747]: I1215 06:32:57.053899 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-5c7cbf548f-v7cjm_fdda9bcd-0316-4549-af8b-ae0e151e59d7/manager/0.log" Dec 15 06:32:57 crc kubenswrapper[4747]: I1215 06:32:57.191988 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5fdd9786f7-dg8cj_e6558c12-d59f-4593-9605-a7dc6c19e766/manager/0.log" Dec 15 06:32:57 crc kubenswrapper[4747]: I1215 06:32:57.240028 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-f76f4954c-5chln_dc8104ce-563e-4e6f-b61d-18e2bdc49879/manager/0.log" Dec 15 06:32:57 crc kubenswrapper[4747]: I1215 06:32:57.378375 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-qw6tr_5f14ea23-34de-4d4b-971d-dc90d34c44a9/manager/0.log" Dec 15 06:32:57 crc kubenswrapper[4747]: I1215 06:32:57.475756 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5fbbf8b6cc-snvkz_60924e24-00f9-4f6a-bf7e-385f8e54a027/manager/0.log" Dec 15 06:32:57 crc kubenswrapper[4747]: I1215 06:32:57.589223 4747 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-sffcl_5a07861b-82a4-47c3-8255-3b76b44da9d6/manager/0.log" Dec 15 06:32:57 crc kubenswrapper[4747]: I1215 06:32:57.635385 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-689f887b54sfqvx_3858e881-df69-47eb-8a78-fa48f7ca7f87/manager/0.log" Dec 15 06:32:58 crc kubenswrapper[4747]: I1215 06:32:58.086831 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-57bbbf4567-4l6vr_e1d8f4a6-dd71-427f-98ac-5e77cc0fb1ae/operator/0.log" Dec 15 06:32:58 crc kubenswrapper[4747]: I1215 06:32:58.128341 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ssgvg_600db6fb-c49e-40e5-a195-756c80b40b7d/registry-server/0.log" Dec 15 06:32:58 crc kubenswrapper[4747]: I1215 06:32:58.329215 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-jmxtj_e1cafba6-81fa-4f70-b79d-4d02cdd194a3/manager/0.log" Dec 15 06:32:58 crc kubenswrapper[4747]: I1215 06:32:58.516205 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8665b56d78-c2gjc_c1d38621-ff5b-4d92-8457-9568c6b67416/manager/0.log" Dec 15 06:32:58 crc kubenswrapper[4747]: I1215 06:32:58.559524 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vtm79_3818fc80-b8e4-4dc2-9470-587cf10a2350/operator/0.log" Dec 15 06:32:58 crc kubenswrapper[4747]: I1215 06:32:58.747489 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5c6df8f9-tm9tq_4e1be8a6-df60-418b-911f-efbf8aa5cf5a/manager/0.log" Dec 15 06:32:58 crc kubenswrapper[4747]: I1215 06:32:58.890896 4747 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-56f6fbdf6-ch5s4_e3f1bf4c-044b-49d5-be51-b853e2f6a7b0/manager/0.log" Dec 15 06:32:58 crc kubenswrapper[4747]: I1215 06:32:58.901819 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-97d456b9-gqlwk_df77558c-ad92-43a1-9d9a-e3fac782b0e8/manager/0.log" Dec 15 06:32:58 crc kubenswrapper[4747]: I1215 06:32:58.961114 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-756ccf86c7-6dlgk_3f5c0d61-d8f5-4bfb-87c1-4f795057abd2/manager/0.log" Dec 15 06:32:59 crc kubenswrapper[4747]: I1215 06:32:59.055711 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-55f78b7c4c-rgxgj_2e8d5dd7-baa6-49fb-9f9f-735905ac6e61/manager/0.log" Dec 15 06:33:01 crc kubenswrapper[4747]: I1215 06:33:01.629875 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:33:01 crc kubenswrapper[4747]: E1215 06:33:01.630329 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:33:05 crc kubenswrapper[4747]: I1215 06:33:05.720650 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-84688cc58c-2mrlh" podUID="01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 15 06:33:13 crc kubenswrapper[4747]: I1215 06:33:13.628810 4747 scope.go:117] "RemoveContainer" 
containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:33:13 crc kubenswrapper[4747]: E1215 06:33:13.629784 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:33:15 crc kubenswrapper[4747]: I1215 06:33:15.480321 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xkbmm_f139e81b-c534-4004-81b1-202a6b0e45f2/control-plane-machine-set-operator/0.log" Dec 15 06:33:15 crc kubenswrapper[4747]: I1215 06:33:15.621054 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-g46rv_efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39/kube-rbac-proxy/0.log" Dec 15 06:33:15 crc kubenswrapper[4747]: I1215 06:33:15.637670 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-g46rv_efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39/machine-api-operator/0.log" Dec 15 06:33:24 crc kubenswrapper[4747]: I1215 06:33:24.629382 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:33:24 crc kubenswrapper[4747]: E1215 06:33:24.630045 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" 
podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:33:26 crc kubenswrapper[4747]: I1215 06:33:26.847539 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-bwhbl_2dc6869c-9693-4dc8-81eb-4ff08e334aaf/cert-manager-controller/0.log" Dec 15 06:33:26 crc kubenswrapper[4747]: I1215 06:33:26.991368 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-t5flq_7e3304fa-a54c-4472-935a-aad6d8673d12/cert-manager-cainjector/0.log" Dec 15 06:33:27 crc kubenswrapper[4747]: I1215 06:33:27.042496 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-qmrz8_56932a48-4e8d-4052-b33e-daff9aeec190/cert-manager-webhook/0.log" Dec 15 06:33:35 crc kubenswrapper[4747]: I1215 06:33:35.630138 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:33:35 crc kubenswrapper[4747]: E1215 06:33:35.631375 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:33:38 crc kubenswrapper[4747]: I1215 06:33:38.360560 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-tjfnm_8f1ea057-6f84-40ec-be2e-54583b3af99b/nmstate-console-plugin/0.log" Dec 15 06:33:38 crc kubenswrapper[4747]: I1215 06:33:38.505747 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-4vz6n_cc636dc4-0911-423b-8327-5b81d759c74a/nmstate-handler/0.log" Dec 15 06:33:38 crc kubenswrapper[4747]: I1215 06:33:38.582690 4747 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-pm75r_b93f5376-bd46-4dc1-82aa-6b1db7622176/nmstate-metrics/0.log" Dec 15 06:33:38 crc kubenswrapper[4747]: I1215 06:33:38.594763 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-pm75r_b93f5376-bd46-4dc1-82aa-6b1db7622176/kube-rbac-proxy/0.log" Dec 15 06:33:38 crc kubenswrapper[4747]: I1215 06:33:38.720976 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-6b7xv_e1f942fd-4913-456f-b28c-463fd3c2759e/nmstate-operator/0.log" Dec 15 06:33:38 crc kubenswrapper[4747]: I1215 06:33:38.754536 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-fns29_37453676-6389-4503-b1dc-9afdbd759c64/nmstate-webhook/0.log" Dec 15 06:33:48 crc kubenswrapper[4747]: I1215 06:33:48.629529 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:33:48 crc kubenswrapper[4747]: E1215 06:33:48.630704 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:33:51 crc kubenswrapper[4747]: I1215 06:33:51.486863 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-84gxw_2fd21575-3653-416d-a59a-d2802bc9bf09/kube-rbac-proxy/0.log" Dec 15 06:33:51 crc kubenswrapper[4747]: I1215 06:33:51.615425 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-5bddd4b946-84gxw_2fd21575-3653-416d-a59a-d2802bc9bf09/controller/0.log" Dec 15 06:33:51 crc kubenswrapper[4747]: I1215 06:33:51.837079 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-frr-files/0.log" Dec 15 06:33:52 crc kubenswrapper[4747]: I1215 06:33:52.003696 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-frr-files/0.log" Dec 15 06:33:52 crc kubenswrapper[4747]: I1215 06:33:52.003995 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-metrics/0.log" Dec 15 06:33:52 crc kubenswrapper[4747]: I1215 06:33:52.046246 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-reloader/0.log" Dec 15 06:33:52 crc kubenswrapper[4747]: I1215 06:33:52.046564 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-reloader/0.log" Dec 15 06:33:52 crc kubenswrapper[4747]: I1215 06:33:52.226879 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-reloader/0.log" Dec 15 06:33:52 crc kubenswrapper[4747]: I1215 06:33:52.244490 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-frr-files/0.log" Dec 15 06:33:52 crc kubenswrapper[4747]: I1215 06:33:52.267459 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-metrics/0.log" Dec 15 06:33:52 crc kubenswrapper[4747]: I1215 06:33:52.267664 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-metrics/0.log" Dec 15 06:33:52 crc kubenswrapper[4747]: I1215 06:33:52.415882 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-metrics/0.log" Dec 15 06:33:52 crc kubenswrapper[4747]: I1215 06:33:52.425213 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-frr-files/0.log" Dec 15 06:33:52 crc kubenswrapper[4747]: I1215 06:33:52.426792 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-reloader/0.log" Dec 15 06:33:52 crc kubenswrapper[4747]: I1215 06:33:52.448422 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/controller/0.log" Dec 15 06:33:52 crc kubenswrapper[4747]: I1215 06:33:52.599559 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/frr-metrics/0.log" Dec 15 06:33:52 crc kubenswrapper[4747]: I1215 06:33:52.633268 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/kube-rbac-proxy/0.log" Dec 15 06:33:52 crc kubenswrapper[4747]: I1215 06:33:52.644619 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/kube-rbac-proxy-frr/0.log" Dec 15 06:33:52 crc kubenswrapper[4747]: I1215 06:33:52.832039 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/reloader/0.log" Dec 15 06:33:52 crc kubenswrapper[4747]: I1215 06:33:52.912814 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-szbwq_7b46267f-c728-4995-9817-87b793f77a58/frr-k8s-webhook-server/0.log" Dec 15 06:33:53 crc kubenswrapper[4747]: I1215 06:33:53.064365 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-75d987bf4c-9srrp_adb7b699-78a1-41ed-a24f-2c57a128568e/manager/0.log" Dec 15 06:33:53 crc kubenswrapper[4747]: I1215 06:33:53.280271 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5c5c75c884-t8trz_3a13fc24-266f-433f-bbff-0cd3d1fc29fc/webhook-server/0.log" Dec 15 06:33:53 crc kubenswrapper[4747]: I1215 06:33:53.316554 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rtnx5_b943df0c-29b6-42f3-884b-707aaf02c5d0/kube-rbac-proxy/0.log" Dec 15 06:33:53 crc kubenswrapper[4747]: I1215 06:33:53.621076 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/frr/0.log" Dec 15 06:33:53 crc kubenswrapper[4747]: I1215 06:33:53.749662 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rtnx5_b943df0c-29b6-42f3-884b-707aaf02c5d0/speaker/0.log" Dec 15 06:34:00 crc kubenswrapper[4747]: I1215 06:34:00.629584 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:34:00 crc kubenswrapper[4747]: E1215 06:34:00.630494 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:34:04 crc kubenswrapper[4747]: I1215 
06:34:04.444854 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f_5049975b-44d3-44ef-98d8-94691dcb042f/util/0.log" Dec 15 06:34:04 crc kubenswrapper[4747]: I1215 06:34:04.612986 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f_5049975b-44d3-44ef-98d8-94691dcb042f/util/0.log" Dec 15 06:34:04 crc kubenswrapper[4747]: I1215 06:34:04.632861 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f_5049975b-44d3-44ef-98d8-94691dcb042f/pull/0.log" Dec 15 06:34:04 crc kubenswrapper[4747]: I1215 06:34:04.674816 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f_5049975b-44d3-44ef-98d8-94691dcb042f/pull/0.log" Dec 15 06:34:04 crc kubenswrapper[4747]: I1215 06:34:04.811613 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f_5049975b-44d3-44ef-98d8-94691dcb042f/util/0.log" Dec 15 06:34:04 crc kubenswrapper[4747]: I1215 06:34:04.816518 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f_5049975b-44d3-44ef-98d8-94691dcb042f/pull/0.log" Dec 15 06:34:04 crc kubenswrapper[4747]: I1215 06:34:04.827322 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f_5049975b-44d3-44ef-98d8-94691dcb042f/extract/0.log" Dec 15 06:34:04 crc kubenswrapper[4747]: I1215 06:34:04.952636 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_30696d2b-dd70-4eb7-88c1-9bc23b39c07c/util/0.log" Dec 15 06:34:05 crc kubenswrapper[4747]: I1215 06:34:05.091711 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_30696d2b-dd70-4eb7-88c1-9bc23b39c07c/util/0.log" Dec 15 06:34:05 crc kubenswrapper[4747]: I1215 06:34:05.109858 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_30696d2b-dd70-4eb7-88c1-9bc23b39c07c/pull/0.log" Dec 15 06:34:05 crc kubenswrapper[4747]: I1215 06:34:05.144060 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_30696d2b-dd70-4eb7-88c1-9bc23b39c07c/pull/0.log" Dec 15 06:34:05 crc kubenswrapper[4747]: I1215 06:34:05.274488 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_30696d2b-dd70-4eb7-88c1-9bc23b39c07c/extract/0.log" Dec 15 06:34:05 crc kubenswrapper[4747]: I1215 06:34:05.277702 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_30696d2b-dd70-4eb7-88c1-9bc23b39c07c/util/0.log" Dec 15 06:34:05 crc kubenswrapper[4747]: I1215 06:34:05.296715 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_30696d2b-dd70-4eb7-88c1-9bc23b39c07c/pull/0.log" Dec 15 06:34:05 crc kubenswrapper[4747]: I1215 06:34:05.482339 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vzrzt_a54ad897-346d-40bf-8b62-df432709d572/extract-utilities/0.log" Dec 15 06:34:05 crc 
kubenswrapper[4747]: I1215 06:34:05.719362 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vzrzt_a54ad897-346d-40bf-8b62-df432709d572/extract-utilities/0.log" Dec 15 06:34:05 crc kubenswrapper[4747]: I1215 06:34:05.731696 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vzrzt_a54ad897-346d-40bf-8b62-df432709d572/extract-content/0.log" Dec 15 06:34:05 crc kubenswrapper[4747]: I1215 06:34:05.744952 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vzrzt_a54ad897-346d-40bf-8b62-df432709d572/extract-content/0.log" Dec 15 06:34:05 crc kubenswrapper[4747]: I1215 06:34:05.886985 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vzrzt_a54ad897-346d-40bf-8b62-df432709d572/extract-utilities/0.log" Dec 15 06:34:05 crc kubenswrapper[4747]: I1215 06:34:05.899581 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vzrzt_a54ad897-346d-40bf-8b62-df432709d572/extract-content/0.log" Dec 15 06:34:06 crc kubenswrapper[4747]: I1215 06:34:06.097606 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dljqm_61ed4e5e-9fba-404a-8e7e-e231ee5d7134/extract-utilities/0.log" Dec 15 06:34:06 crc kubenswrapper[4747]: I1215 06:34:06.267707 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dljqm_61ed4e5e-9fba-404a-8e7e-e231ee5d7134/extract-utilities/0.log" Dec 15 06:34:06 crc kubenswrapper[4747]: I1215 06:34:06.270160 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dljqm_61ed4e5e-9fba-404a-8e7e-e231ee5d7134/extract-content/0.log" Dec 15 06:34:06 crc kubenswrapper[4747]: I1215 06:34:06.305537 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-vzrzt_a54ad897-346d-40bf-8b62-df432709d572/registry-server/0.log" Dec 15 06:34:06 crc kubenswrapper[4747]: I1215 06:34:06.316751 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dljqm_61ed4e5e-9fba-404a-8e7e-e231ee5d7134/extract-content/0.log" Dec 15 06:34:06 crc kubenswrapper[4747]: I1215 06:34:06.430040 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dljqm_61ed4e5e-9fba-404a-8e7e-e231ee5d7134/extract-utilities/0.log" Dec 15 06:34:06 crc kubenswrapper[4747]: I1215 06:34:06.492263 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dljqm_61ed4e5e-9fba-404a-8e7e-e231ee5d7134/extract-content/0.log" Dec 15 06:34:06 crc kubenswrapper[4747]: I1215 06:34:06.622691 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-qr5lt_f22206aa-87c5-4c96-b146-53b0890697fa/marketplace-operator/0.log" Dec 15 06:34:06 crc kubenswrapper[4747]: I1215 06:34:06.711274 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r755m_efb301de-15d1-452a-b8e9-10296872545b/extract-utilities/0.log" Dec 15 06:34:06 crc kubenswrapper[4747]: I1215 06:34:06.726320 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dljqm_61ed4e5e-9fba-404a-8e7e-e231ee5d7134/registry-server/0.log" Dec 15 06:34:06 crc kubenswrapper[4747]: I1215 06:34:06.879329 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r755m_efb301de-15d1-452a-b8e9-10296872545b/extract-content/0.log" Dec 15 06:34:06 crc kubenswrapper[4747]: I1215 06:34:06.885173 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-r755m_efb301de-15d1-452a-b8e9-10296872545b/extract-content/0.log" Dec 15 06:34:06 crc kubenswrapper[4747]: I1215 06:34:06.887078 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r755m_efb301de-15d1-452a-b8e9-10296872545b/extract-utilities/0.log" Dec 15 06:34:07 crc kubenswrapper[4747]: I1215 06:34:07.026881 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r755m_efb301de-15d1-452a-b8e9-10296872545b/extract-content/0.log" Dec 15 06:34:07 crc kubenswrapper[4747]: I1215 06:34:07.035177 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r755m_efb301de-15d1-452a-b8e9-10296872545b/extract-utilities/0.log" Dec 15 06:34:07 crc kubenswrapper[4747]: I1215 06:34:07.249360 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r755m_efb301de-15d1-452a-b8e9-10296872545b/registry-server/0.log" Dec 15 06:34:07 crc kubenswrapper[4747]: I1215 06:34:07.251043 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lbw8g_e2652452-9d91-4f09-9422-fa69bed43b9e/extract-utilities/0.log" Dec 15 06:34:07 crc kubenswrapper[4747]: I1215 06:34:07.333676 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lbw8g_e2652452-9d91-4f09-9422-fa69bed43b9e/extract-content/0.log" Dec 15 06:34:07 crc kubenswrapper[4747]: I1215 06:34:07.357380 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lbw8g_e2652452-9d91-4f09-9422-fa69bed43b9e/extract-utilities/0.log" Dec 15 06:34:07 crc kubenswrapper[4747]: I1215 06:34:07.437419 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lbw8g_e2652452-9d91-4f09-9422-fa69bed43b9e/extract-content/0.log" 
Dec 15 06:34:07 crc kubenswrapper[4747]: I1215 06:34:07.547251 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lbw8g_e2652452-9d91-4f09-9422-fa69bed43b9e/extract-content/0.log" Dec 15 06:34:07 crc kubenswrapper[4747]: I1215 06:34:07.579204 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lbw8g_e2652452-9d91-4f09-9422-fa69bed43b9e/extract-utilities/0.log" Dec 15 06:34:07 crc kubenswrapper[4747]: I1215 06:34:07.859488 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lbw8g_e2652452-9d91-4f09-9422-fa69bed43b9e/registry-server/0.log" Dec 15 06:34:15 crc kubenswrapper[4747]: I1215 06:34:15.629422 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:34:15 crc kubenswrapper[4747]: E1215 06:34:15.630424 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:34:27 crc kubenswrapper[4747]: E1215 06:34:27.746724 4747 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.25.116:33466->192.168.25.116:34815: write tcp 192.168.25.116:33466->192.168.25.116:34815: write: broken pipe Dec 15 06:34:29 crc kubenswrapper[4747]: I1215 06:34:29.629144 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:34:30 crc kubenswrapper[4747]: I1215 06:34:30.056956 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" 
event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerStarted","Data":"e21c8b075436a10f5cd74cbf56c983328815d2f43d9184bb57bbdc7f8ffd76c1"} Dec 15 06:34:38 crc kubenswrapper[4747]: I1215 06:34:38.888433 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sjn4t"] Dec 15 06:34:38 crc kubenswrapper[4747]: E1215 06:34:38.889388 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f8ff5e0-398a-43ca-a292-6da4ef44b19c" containerName="extract-utilities" Dec 15 06:34:38 crc kubenswrapper[4747]: I1215 06:34:38.889402 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f8ff5e0-398a-43ca-a292-6da4ef44b19c" containerName="extract-utilities" Dec 15 06:34:38 crc kubenswrapper[4747]: E1215 06:34:38.889436 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f8ff5e0-398a-43ca-a292-6da4ef44b19c" containerName="extract-content" Dec 15 06:34:38 crc kubenswrapper[4747]: I1215 06:34:38.889443 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f8ff5e0-398a-43ca-a292-6da4ef44b19c" containerName="extract-content" Dec 15 06:34:38 crc kubenswrapper[4747]: E1215 06:34:38.889452 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f8ff5e0-398a-43ca-a292-6da4ef44b19c" containerName="registry-server" Dec 15 06:34:38 crc kubenswrapper[4747]: I1215 06:34:38.889458 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f8ff5e0-398a-43ca-a292-6da4ef44b19c" containerName="registry-server" Dec 15 06:34:38 crc kubenswrapper[4747]: I1215 06:34:38.889639 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f8ff5e0-398a-43ca-a292-6da4ef44b19c" containerName="registry-server" Dec 15 06:34:38 crc kubenswrapper[4747]: I1215 06:34:38.890823 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjn4t" Dec 15 06:34:38 crc kubenswrapper[4747]: I1215 06:34:38.902426 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjn4t"] Dec 15 06:34:39 crc kubenswrapper[4747]: I1215 06:34:39.014451 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa8bbb43-a627-46d2-9062-e5ff50ae7d8f-utilities\") pod \"redhat-marketplace-sjn4t\" (UID: \"fa8bbb43-a627-46d2-9062-e5ff50ae7d8f\") " pod="openshift-marketplace/redhat-marketplace-sjn4t" Dec 15 06:34:39 crc kubenswrapper[4747]: I1215 06:34:39.014824 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa8bbb43-a627-46d2-9062-e5ff50ae7d8f-catalog-content\") pod \"redhat-marketplace-sjn4t\" (UID: \"fa8bbb43-a627-46d2-9062-e5ff50ae7d8f\") " pod="openshift-marketplace/redhat-marketplace-sjn4t" Dec 15 06:34:39 crc kubenswrapper[4747]: I1215 06:34:39.014884 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdgss\" (UniqueName: \"kubernetes.io/projected/fa8bbb43-a627-46d2-9062-e5ff50ae7d8f-kube-api-access-pdgss\") pod \"redhat-marketplace-sjn4t\" (UID: \"fa8bbb43-a627-46d2-9062-e5ff50ae7d8f\") " pod="openshift-marketplace/redhat-marketplace-sjn4t" Dec 15 06:34:39 crc kubenswrapper[4747]: I1215 06:34:39.117245 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa8bbb43-a627-46d2-9062-e5ff50ae7d8f-utilities\") pod \"redhat-marketplace-sjn4t\" (UID: \"fa8bbb43-a627-46d2-9062-e5ff50ae7d8f\") " pod="openshift-marketplace/redhat-marketplace-sjn4t" Dec 15 06:34:39 crc kubenswrapper[4747]: I1215 06:34:39.117349 4747 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa8bbb43-a627-46d2-9062-e5ff50ae7d8f-catalog-content\") pod \"redhat-marketplace-sjn4t\" (UID: \"fa8bbb43-a627-46d2-9062-e5ff50ae7d8f\") " pod="openshift-marketplace/redhat-marketplace-sjn4t" Dec 15 06:34:39 crc kubenswrapper[4747]: I1215 06:34:39.117395 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdgss\" (UniqueName: \"kubernetes.io/projected/fa8bbb43-a627-46d2-9062-e5ff50ae7d8f-kube-api-access-pdgss\") pod \"redhat-marketplace-sjn4t\" (UID: \"fa8bbb43-a627-46d2-9062-e5ff50ae7d8f\") " pod="openshift-marketplace/redhat-marketplace-sjn4t" Dec 15 06:34:39 crc kubenswrapper[4747]: I1215 06:34:39.118128 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa8bbb43-a627-46d2-9062-e5ff50ae7d8f-utilities\") pod \"redhat-marketplace-sjn4t\" (UID: \"fa8bbb43-a627-46d2-9062-e5ff50ae7d8f\") " pod="openshift-marketplace/redhat-marketplace-sjn4t" Dec 15 06:34:39 crc kubenswrapper[4747]: I1215 06:34:39.118133 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa8bbb43-a627-46d2-9062-e5ff50ae7d8f-catalog-content\") pod \"redhat-marketplace-sjn4t\" (UID: \"fa8bbb43-a627-46d2-9062-e5ff50ae7d8f\") " pod="openshift-marketplace/redhat-marketplace-sjn4t" Dec 15 06:34:39 crc kubenswrapper[4747]: I1215 06:34:39.140301 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdgss\" (UniqueName: \"kubernetes.io/projected/fa8bbb43-a627-46d2-9062-e5ff50ae7d8f-kube-api-access-pdgss\") pod \"redhat-marketplace-sjn4t\" (UID: \"fa8bbb43-a627-46d2-9062-e5ff50ae7d8f\") " pod="openshift-marketplace/redhat-marketplace-sjn4t" Dec 15 06:34:39 crc kubenswrapper[4747]: I1215 06:34:39.211621 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjn4t" Dec 15 06:34:40 crc kubenswrapper[4747]: I1215 06:34:40.099296 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjn4t"] Dec 15 06:34:40 crc kubenswrapper[4747]: I1215 06:34:40.150305 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjn4t" event={"ID":"fa8bbb43-a627-46d2-9062-e5ff50ae7d8f","Type":"ContainerStarted","Data":"2975bead1e845d256769b91026102c0d9ee1a0fa625b9ef217f392be9f0e7d81"} Dec 15 06:34:41 crc kubenswrapper[4747]: I1215 06:34:41.159387 4747 generic.go:334] "Generic (PLEG): container finished" podID="fa8bbb43-a627-46d2-9062-e5ff50ae7d8f" containerID="3f7be6eca748d69921003a4861334ce13be7b36a4fcabafd709cd21f87efd152" exitCode=0 Dec 15 06:34:41 crc kubenswrapper[4747]: I1215 06:34:41.159634 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjn4t" event={"ID":"fa8bbb43-a627-46d2-9062-e5ff50ae7d8f","Type":"ContainerDied","Data":"3f7be6eca748d69921003a4861334ce13be7b36a4fcabafd709cd21f87efd152"} Dec 15 06:34:41 crc kubenswrapper[4747]: I1215 06:34:41.294128 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mqgpb"] Dec 15 06:34:41 crc kubenswrapper[4747]: I1215 06:34:41.297348 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mqgpb" Dec 15 06:34:41 crc kubenswrapper[4747]: I1215 06:34:41.311005 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mqgpb"] Dec 15 06:34:41 crc kubenswrapper[4747]: I1215 06:34:41.388345 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jqzx\" (UniqueName: \"kubernetes.io/projected/8eb79cb0-8250-4863-b16b-0bf4f055bbc1-kube-api-access-7jqzx\") pod \"community-operators-mqgpb\" (UID: \"8eb79cb0-8250-4863-b16b-0bf4f055bbc1\") " pod="openshift-marketplace/community-operators-mqgpb" Dec 15 06:34:41 crc kubenswrapper[4747]: I1215 06:34:41.388467 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eb79cb0-8250-4863-b16b-0bf4f055bbc1-utilities\") pod \"community-operators-mqgpb\" (UID: \"8eb79cb0-8250-4863-b16b-0bf4f055bbc1\") " pod="openshift-marketplace/community-operators-mqgpb" Dec 15 06:34:41 crc kubenswrapper[4747]: I1215 06:34:41.388519 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eb79cb0-8250-4863-b16b-0bf4f055bbc1-catalog-content\") pod \"community-operators-mqgpb\" (UID: \"8eb79cb0-8250-4863-b16b-0bf4f055bbc1\") " pod="openshift-marketplace/community-operators-mqgpb" Dec 15 06:34:41 crc kubenswrapper[4747]: I1215 06:34:41.490798 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eb79cb0-8250-4863-b16b-0bf4f055bbc1-catalog-content\") pod \"community-operators-mqgpb\" (UID: \"8eb79cb0-8250-4863-b16b-0bf4f055bbc1\") " pod="openshift-marketplace/community-operators-mqgpb" Dec 15 06:34:41 crc kubenswrapper[4747]: I1215 06:34:41.491239 4747 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eb79cb0-8250-4863-b16b-0bf4f055bbc1-catalog-content\") pod \"community-operators-mqgpb\" (UID: \"8eb79cb0-8250-4863-b16b-0bf4f055bbc1\") " pod="openshift-marketplace/community-operators-mqgpb" Dec 15 06:34:41 crc kubenswrapper[4747]: I1215 06:34:41.491457 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jqzx\" (UniqueName: \"kubernetes.io/projected/8eb79cb0-8250-4863-b16b-0bf4f055bbc1-kube-api-access-7jqzx\") pod \"community-operators-mqgpb\" (UID: \"8eb79cb0-8250-4863-b16b-0bf4f055bbc1\") " pod="openshift-marketplace/community-operators-mqgpb" Dec 15 06:34:41 crc kubenswrapper[4747]: I1215 06:34:41.491599 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eb79cb0-8250-4863-b16b-0bf4f055bbc1-utilities\") pod \"community-operators-mqgpb\" (UID: \"8eb79cb0-8250-4863-b16b-0bf4f055bbc1\") " pod="openshift-marketplace/community-operators-mqgpb" Dec 15 06:34:41 crc kubenswrapper[4747]: I1215 06:34:41.492060 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eb79cb0-8250-4863-b16b-0bf4f055bbc1-utilities\") pod \"community-operators-mqgpb\" (UID: \"8eb79cb0-8250-4863-b16b-0bf4f055bbc1\") " pod="openshift-marketplace/community-operators-mqgpb" Dec 15 06:34:41 crc kubenswrapper[4747]: I1215 06:34:41.514727 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jqzx\" (UniqueName: \"kubernetes.io/projected/8eb79cb0-8250-4863-b16b-0bf4f055bbc1-kube-api-access-7jqzx\") pod \"community-operators-mqgpb\" (UID: \"8eb79cb0-8250-4863-b16b-0bf4f055bbc1\") " pod="openshift-marketplace/community-operators-mqgpb" Dec 15 06:34:41 crc kubenswrapper[4747]: I1215 06:34:41.621617 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mqgpb" Dec 15 06:34:42 crc kubenswrapper[4747]: W1215 06:34:42.066858 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eb79cb0_8250_4863_b16b_0bf4f055bbc1.slice/crio-5bbd58c4d5a9d5a07c7380f356639bc8a5f707dc213a25a66fef0765fe63b3a9 WatchSource:0}: Error finding container 5bbd58c4d5a9d5a07c7380f356639bc8a5f707dc213a25a66fef0765fe63b3a9: Status 404 returned error can't find the container with id 5bbd58c4d5a9d5a07c7380f356639bc8a5f707dc213a25a66fef0765fe63b3a9 Dec 15 06:34:42 crc kubenswrapper[4747]: I1215 06:34:42.070979 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mqgpb"] Dec 15 06:34:42 crc kubenswrapper[4747]: I1215 06:34:42.169029 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqgpb" event={"ID":"8eb79cb0-8250-4863-b16b-0bf4f055bbc1","Type":"ContainerStarted","Data":"5bbd58c4d5a9d5a07c7380f356639bc8a5f707dc213a25a66fef0765fe63b3a9"} Dec 15 06:34:43 crc kubenswrapper[4747]: I1215 06:34:43.178655 4747 generic.go:334] "Generic (PLEG): container finished" podID="fa8bbb43-a627-46d2-9062-e5ff50ae7d8f" containerID="482b717af81e2f135c28de5bb9111d6762b330c4f3f8b54c62912301b5c1d2a6" exitCode=0 Dec 15 06:34:43 crc kubenswrapper[4747]: I1215 06:34:43.178972 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjn4t" event={"ID":"fa8bbb43-a627-46d2-9062-e5ff50ae7d8f","Type":"ContainerDied","Data":"482b717af81e2f135c28de5bb9111d6762b330c4f3f8b54c62912301b5c1d2a6"} Dec 15 06:34:43 crc kubenswrapper[4747]: I1215 06:34:43.181713 4747 generic.go:334] "Generic (PLEG): container finished" podID="8eb79cb0-8250-4863-b16b-0bf4f055bbc1" containerID="17fef5193ca71ca381c2fc28d213be4ea1d8d946f7aa84d28f509ec7d096eae5" exitCode=0 Dec 15 06:34:43 crc kubenswrapper[4747]: I1215 
06:34:43.181763 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqgpb" event={"ID":"8eb79cb0-8250-4863-b16b-0bf4f055bbc1","Type":"ContainerDied","Data":"17fef5193ca71ca381c2fc28d213be4ea1d8d946f7aa84d28f509ec7d096eae5"} Dec 15 06:34:44 crc kubenswrapper[4747]: I1215 06:34:44.204590 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjn4t" event={"ID":"fa8bbb43-a627-46d2-9062-e5ff50ae7d8f","Type":"ContainerStarted","Data":"e98e834f8bf93ce97cd311418934fdefd118f68110512b3ebc9596dd15ffb4dd"} Dec 15 06:34:44 crc kubenswrapper[4747]: I1215 06:34:44.233944 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sjn4t" podStartSLOduration=3.7193863499999997 podStartE2EDuration="6.233899326s" podCreationTimestamp="2025-12-15 06:34:38 +0000 UTC" firstStartedPulling="2025-12-15 06:34:41.161638459 +0000 UTC m=+3444.858150375" lastFinishedPulling="2025-12-15 06:34:43.676151434 +0000 UTC m=+3447.372663351" observedRunningTime="2025-12-15 06:34:44.227853639 +0000 UTC m=+3447.924365556" watchObservedRunningTime="2025-12-15 06:34:44.233899326 +0000 UTC m=+3447.930411244" Dec 15 06:34:45 crc kubenswrapper[4747]: I1215 06:34:45.215508 4747 generic.go:334] "Generic (PLEG): container finished" podID="8eb79cb0-8250-4863-b16b-0bf4f055bbc1" containerID="a32cecaec591afd6245bbca30978b5a29240cb3e4599f99fc9d83246d8210603" exitCode=0 Dec 15 06:34:45 crc kubenswrapper[4747]: I1215 06:34:45.216981 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqgpb" event={"ID":"8eb79cb0-8250-4863-b16b-0bf4f055bbc1","Type":"ContainerDied","Data":"a32cecaec591afd6245bbca30978b5a29240cb3e4599f99fc9d83246d8210603"} Dec 15 06:34:46 crc kubenswrapper[4747]: I1215 06:34:46.225826 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqgpb" 
event={"ID":"8eb79cb0-8250-4863-b16b-0bf4f055bbc1","Type":"ContainerStarted","Data":"fa8df285aa8b0a75f6afc13c6293f48442a0c0110449a0d786b94438fd313c7f"} Dec 15 06:34:46 crc kubenswrapper[4747]: I1215 06:34:46.242867 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mqgpb" podStartSLOduration=2.60657598 podStartE2EDuration="5.242852594s" podCreationTimestamp="2025-12-15 06:34:41 +0000 UTC" firstStartedPulling="2025-12-15 06:34:43.184358067 +0000 UTC m=+3446.880869985" lastFinishedPulling="2025-12-15 06:34:45.820634682 +0000 UTC m=+3449.517146599" observedRunningTime="2025-12-15 06:34:46.241774567 +0000 UTC m=+3449.938286483" watchObservedRunningTime="2025-12-15 06:34:46.242852594 +0000 UTC m=+3449.939364510" Dec 15 06:34:49 crc kubenswrapper[4747]: I1215 06:34:49.213098 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sjn4t" Dec 15 06:34:49 crc kubenswrapper[4747]: I1215 06:34:49.213534 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sjn4t" Dec 15 06:34:49 crc kubenswrapper[4747]: I1215 06:34:49.255049 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sjn4t" Dec 15 06:34:49 crc kubenswrapper[4747]: I1215 06:34:49.293693 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sjn4t" Dec 15 06:34:50 crc kubenswrapper[4747]: I1215 06:34:50.084355 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjn4t"] Dec 15 06:34:51 crc kubenswrapper[4747]: I1215 06:34:51.273240 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sjn4t" podUID="fa8bbb43-a627-46d2-9062-e5ff50ae7d8f" containerName="registry-server" 
containerID="cri-o://e98e834f8bf93ce97cd311418934fdefd118f68110512b3ebc9596dd15ffb4dd" gracePeriod=2 Dec 15 06:34:51 crc kubenswrapper[4747]: I1215 06:34:51.622153 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mqgpb" Dec 15 06:34:51 crc kubenswrapper[4747]: I1215 06:34:51.622211 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mqgpb" Dec 15 06:34:51 crc kubenswrapper[4747]: I1215 06:34:51.670581 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mqgpb" Dec 15 06:34:51 crc kubenswrapper[4747]: I1215 06:34:51.719494 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjn4t" Dec 15 06:34:51 crc kubenswrapper[4747]: I1215 06:34:51.804348 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdgss\" (UniqueName: \"kubernetes.io/projected/fa8bbb43-a627-46d2-9062-e5ff50ae7d8f-kube-api-access-pdgss\") pod \"fa8bbb43-a627-46d2-9062-e5ff50ae7d8f\" (UID: \"fa8bbb43-a627-46d2-9062-e5ff50ae7d8f\") " Dec 15 06:34:51 crc kubenswrapper[4747]: I1215 06:34:51.804476 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa8bbb43-a627-46d2-9062-e5ff50ae7d8f-utilities\") pod \"fa8bbb43-a627-46d2-9062-e5ff50ae7d8f\" (UID: \"fa8bbb43-a627-46d2-9062-e5ff50ae7d8f\") " Dec 15 06:34:51 crc kubenswrapper[4747]: I1215 06:34:51.804523 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa8bbb43-a627-46d2-9062-e5ff50ae7d8f-catalog-content\") pod \"fa8bbb43-a627-46d2-9062-e5ff50ae7d8f\" (UID: \"fa8bbb43-a627-46d2-9062-e5ff50ae7d8f\") " Dec 15 06:34:51 crc kubenswrapper[4747]: I1215 06:34:51.805284 
4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa8bbb43-a627-46d2-9062-e5ff50ae7d8f-utilities" (OuterVolumeSpecName: "utilities") pod "fa8bbb43-a627-46d2-9062-e5ff50ae7d8f" (UID: "fa8bbb43-a627-46d2-9062-e5ff50ae7d8f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:34:51 crc kubenswrapper[4747]: I1215 06:34:51.811037 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa8bbb43-a627-46d2-9062-e5ff50ae7d8f-kube-api-access-pdgss" (OuterVolumeSpecName: "kube-api-access-pdgss") pod "fa8bbb43-a627-46d2-9062-e5ff50ae7d8f" (UID: "fa8bbb43-a627-46d2-9062-e5ff50ae7d8f"). InnerVolumeSpecName "kube-api-access-pdgss". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:34:51 crc kubenswrapper[4747]: I1215 06:34:51.837942 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa8bbb43-a627-46d2-9062-e5ff50ae7d8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa8bbb43-a627-46d2-9062-e5ff50ae7d8f" (UID: "fa8bbb43-a627-46d2-9062-e5ff50ae7d8f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:34:51 crc kubenswrapper[4747]: I1215 06:34:51.907169 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdgss\" (UniqueName: \"kubernetes.io/projected/fa8bbb43-a627-46d2-9062-e5ff50ae7d8f-kube-api-access-pdgss\") on node \"crc\" DevicePath \"\"" Dec 15 06:34:51 crc kubenswrapper[4747]: I1215 06:34:51.907203 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa8bbb43-a627-46d2-9062-e5ff50ae7d8f-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 06:34:51 crc kubenswrapper[4747]: I1215 06:34:51.907215 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa8bbb43-a627-46d2-9062-e5ff50ae7d8f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 06:34:52 crc kubenswrapper[4747]: I1215 06:34:52.284988 4747 generic.go:334] "Generic (PLEG): container finished" podID="fa8bbb43-a627-46d2-9062-e5ff50ae7d8f" containerID="e98e834f8bf93ce97cd311418934fdefd118f68110512b3ebc9596dd15ffb4dd" exitCode=0 Dec 15 06:34:52 crc kubenswrapper[4747]: I1215 06:34:52.285101 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjn4t" Dec 15 06:34:52 crc kubenswrapper[4747]: I1215 06:34:52.285230 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjn4t" event={"ID":"fa8bbb43-a627-46d2-9062-e5ff50ae7d8f","Type":"ContainerDied","Data":"e98e834f8bf93ce97cd311418934fdefd118f68110512b3ebc9596dd15ffb4dd"} Dec 15 06:34:52 crc kubenswrapper[4747]: I1215 06:34:52.285307 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjn4t" event={"ID":"fa8bbb43-a627-46d2-9062-e5ff50ae7d8f","Type":"ContainerDied","Data":"2975bead1e845d256769b91026102c0d9ee1a0fa625b9ef217f392be9f0e7d81"} Dec 15 06:34:52 crc kubenswrapper[4747]: I1215 06:34:52.285339 4747 scope.go:117] "RemoveContainer" containerID="e98e834f8bf93ce97cd311418934fdefd118f68110512b3ebc9596dd15ffb4dd" Dec 15 06:34:52 crc kubenswrapper[4747]: I1215 06:34:52.303545 4747 scope.go:117] "RemoveContainer" containerID="482b717af81e2f135c28de5bb9111d6762b330c4f3f8b54c62912301b5c1d2a6" Dec 15 06:34:52 crc kubenswrapper[4747]: I1215 06:34:52.327339 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mqgpb" Dec 15 06:34:52 crc kubenswrapper[4747]: I1215 06:34:52.330630 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjn4t"] Dec 15 06:34:52 crc kubenswrapper[4747]: I1215 06:34:52.331338 4747 scope.go:117] "RemoveContainer" containerID="3f7be6eca748d69921003a4861334ce13be7b36a4fcabafd709cd21f87efd152" Dec 15 06:34:52 crc kubenswrapper[4747]: I1215 06:34:52.342644 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjn4t"] Dec 15 06:34:52 crc kubenswrapper[4747]: I1215 06:34:52.377483 4747 scope.go:117] "RemoveContainer" containerID="e98e834f8bf93ce97cd311418934fdefd118f68110512b3ebc9596dd15ffb4dd" Dec 15 06:34:52 crc 
kubenswrapper[4747]: E1215 06:34:52.377939 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e98e834f8bf93ce97cd311418934fdefd118f68110512b3ebc9596dd15ffb4dd\": container with ID starting with e98e834f8bf93ce97cd311418934fdefd118f68110512b3ebc9596dd15ffb4dd not found: ID does not exist" containerID="e98e834f8bf93ce97cd311418934fdefd118f68110512b3ebc9596dd15ffb4dd" Dec 15 06:34:52 crc kubenswrapper[4747]: I1215 06:34:52.377986 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e98e834f8bf93ce97cd311418934fdefd118f68110512b3ebc9596dd15ffb4dd"} err="failed to get container status \"e98e834f8bf93ce97cd311418934fdefd118f68110512b3ebc9596dd15ffb4dd\": rpc error: code = NotFound desc = could not find container \"e98e834f8bf93ce97cd311418934fdefd118f68110512b3ebc9596dd15ffb4dd\": container with ID starting with e98e834f8bf93ce97cd311418934fdefd118f68110512b3ebc9596dd15ffb4dd not found: ID does not exist" Dec 15 06:34:52 crc kubenswrapper[4747]: I1215 06:34:52.378014 4747 scope.go:117] "RemoveContainer" containerID="482b717af81e2f135c28de5bb9111d6762b330c4f3f8b54c62912301b5c1d2a6" Dec 15 06:34:52 crc kubenswrapper[4747]: E1215 06:34:52.378465 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"482b717af81e2f135c28de5bb9111d6762b330c4f3f8b54c62912301b5c1d2a6\": container with ID starting with 482b717af81e2f135c28de5bb9111d6762b330c4f3f8b54c62912301b5c1d2a6 not found: ID does not exist" containerID="482b717af81e2f135c28de5bb9111d6762b330c4f3f8b54c62912301b5c1d2a6" Dec 15 06:34:52 crc kubenswrapper[4747]: I1215 06:34:52.378494 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"482b717af81e2f135c28de5bb9111d6762b330c4f3f8b54c62912301b5c1d2a6"} err="failed to get container status 
\"482b717af81e2f135c28de5bb9111d6762b330c4f3f8b54c62912301b5c1d2a6\": rpc error: code = NotFound desc = could not find container \"482b717af81e2f135c28de5bb9111d6762b330c4f3f8b54c62912301b5c1d2a6\": container with ID starting with 482b717af81e2f135c28de5bb9111d6762b330c4f3f8b54c62912301b5c1d2a6 not found: ID does not exist" Dec 15 06:34:52 crc kubenswrapper[4747]: I1215 06:34:52.378510 4747 scope.go:117] "RemoveContainer" containerID="3f7be6eca748d69921003a4861334ce13be7b36a4fcabafd709cd21f87efd152" Dec 15 06:34:52 crc kubenswrapper[4747]: E1215 06:34:52.379039 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f7be6eca748d69921003a4861334ce13be7b36a4fcabafd709cd21f87efd152\": container with ID starting with 3f7be6eca748d69921003a4861334ce13be7b36a4fcabafd709cd21f87efd152 not found: ID does not exist" containerID="3f7be6eca748d69921003a4861334ce13be7b36a4fcabafd709cd21f87efd152" Dec 15 06:34:52 crc kubenswrapper[4747]: I1215 06:34:52.379066 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f7be6eca748d69921003a4861334ce13be7b36a4fcabafd709cd21f87efd152"} err="failed to get container status \"3f7be6eca748d69921003a4861334ce13be7b36a4fcabafd709cd21f87efd152\": rpc error: code = NotFound desc = could not find container \"3f7be6eca748d69921003a4861334ce13be7b36a4fcabafd709cd21f87efd152\": container with ID starting with 3f7be6eca748d69921003a4861334ce13be7b36a4fcabafd709cd21f87efd152 not found: ID does not exist" Dec 15 06:34:52 crc kubenswrapper[4747]: I1215 06:34:52.642462 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa8bbb43-a627-46d2-9062-e5ff50ae7d8f" path="/var/lib/kubelet/pods/fa8bbb43-a627-46d2-9062-e5ff50ae7d8f/volumes" Dec 15 06:34:54 crc kubenswrapper[4747]: I1215 06:34:54.085123 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mqgpb"] Dec 15 
06:34:54 crc kubenswrapper[4747]: I1215 06:34:54.304892 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mqgpb" podUID="8eb79cb0-8250-4863-b16b-0bf4f055bbc1" containerName="registry-server" containerID="cri-o://fa8df285aa8b0a75f6afc13c6293f48442a0c0110449a0d786b94438fd313c7f" gracePeriod=2 Dec 15 06:34:54 crc kubenswrapper[4747]: I1215 06:34:54.677011 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mqgpb" Dec 15 06:34:54 crc kubenswrapper[4747]: I1215 06:34:54.774637 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jqzx\" (UniqueName: \"kubernetes.io/projected/8eb79cb0-8250-4863-b16b-0bf4f055bbc1-kube-api-access-7jqzx\") pod \"8eb79cb0-8250-4863-b16b-0bf4f055bbc1\" (UID: \"8eb79cb0-8250-4863-b16b-0bf4f055bbc1\") " Dec 15 06:34:54 crc kubenswrapper[4747]: I1215 06:34:54.774716 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eb79cb0-8250-4863-b16b-0bf4f055bbc1-catalog-content\") pod \"8eb79cb0-8250-4863-b16b-0bf4f055bbc1\" (UID: \"8eb79cb0-8250-4863-b16b-0bf4f055bbc1\") " Dec 15 06:34:54 crc kubenswrapper[4747]: I1215 06:34:54.774785 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eb79cb0-8250-4863-b16b-0bf4f055bbc1-utilities\") pod \"8eb79cb0-8250-4863-b16b-0bf4f055bbc1\" (UID: \"8eb79cb0-8250-4863-b16b-0bf4f055bbc1\") " Dec 15 06:34:54 crc kubenswrapper[4747]: I1215 06:34:54.776313 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eb79cb0-8250-4863-b16b-0bf4f055bbc1-utilities" (OuterVolumeSpecName: "utilities") pod "8eb79cb0-8250-4863-b16b-0bf4f055bbc1" (UID: "8eb79cb0-8250-4863-b16b-0bf4f055bbc1"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:34:54 crc kubenswrapper[4747]: I1215 06:34:54.785101 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eb79cb0-8250-4863-b16b-0bf4f055bbc1-kube-api-access-7jqzx" (OuterVolumeSpecName: "kube-api-access-7jqzx") pod "8eb79cb0-8250-4863-b16b-0bf4f055bbc1" (UID: "8eb79cb0-8250-4863-b16b-0bf4f055bbc1"). InnerVolumeSpecName "kube-api-access-7jqzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:34:54 crc kubenswrapper[4747]: I1215 06:34:54.827293 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eb79cb0-8250-4863-b16b-0bf4f055bbc1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8eb79cb0-8250-4863-b16b-0bf4f055bbc1" (UID: "8eb79cb0-8250-4863-b16b-0bf4f055bbc1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:34:54 crc kubenswrapper[4747]: I1215 06:34:54.877041 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jqzx\" (UniqueName: \"kubernetes.io/projected/8eb79cb0-8250-4863-b16b-0bf4f055bbc1-kube-api-access-7jqzx\") on node \"crc\" DevicePath \"\"" Dec 15 06:34:54 crc kubenswrapper[4747]: I1215 06:34:54.877076 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eb79cb0-8250-4863-b16b-0bf4f055bbc1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 06:34:54 crc kubenswrapper[4747]: I1215 06:34:54.877092 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eb79cb0-8250-4863-b16b-0bf4f055bbc1-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 06:34:55 crc kubenswrapper[4747]: I1215 06:34:55.318196 4747 generic.go:334] "Generic (PLEG): container finished" podID="8eb79cb0-8250-4863-b16b-0bf4f055bbc1" 
containerID="fa8df285aa8b0a75f6afc13c6293f48442a0c0110449a0d786b94438fd313c7f" exitCode=0 Dec 15 06:34:55 crc kubenswrapper[4747]: I1215 06:34:55.318255 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqgpb" event={"ID":"8eb79cb0-8250-4863-b16b-0bf4f055bbc1","Type":"ContainerDied","Data":"fa8df285aa8b0a75f6afc13c6293f48442a0c0110449a0d786b94438fd313c7f"} Dec 15 06:34:55 crc kubenswrapper[4747]: I1215 06:34:55.318291 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mqgpb" Dec 15 06:34:55 crc kubenswrapper[4747]: I1215 06:34:55.318323 4747 scope.go:117] "RemoveContainer" containerID="fa8df285aa8b0a75f6afc13c6293f48442a0c0110449a0d786b94438fd313c7f" Dec 15 06:34:55 crc kubenswrapper[4747]: I1215 06:34:55.318302 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqgpb" event={"ID":"8eb79cb0-8250-4863-b16b-0bf4f055bbc1","Type":"ContainerDied","Data":"5bbd58c4d5a9d5a07c7380f356639bc8a5f707dc213a25a66fef0765fe63b3a9"} Dec 15 06:34:55 crc kubenswrapper[4747]: I1215 06:34:55.336185 4747 scope.go:117] "RemoveContainer" containerID="a32cecaec591afd6245bbca30978b5a29240cb3e4599f99fc9d83246d8210603" Dec 15 06:34:55 crc kubenswrapper[4747]: I1215 06:34:55.376776 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mqgpb"] Dec 15 06:34:55 crc kubenswrapper[4747]: I1215 06:34:55.377121 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mqgpb"] Dec 15 06:34:55 crc kubenswrapper[4747]: I1215 06:34:55.390877 4747 scope.go:117] "RemoveContainer" containerID="17fef5193ca71ca381c2fc28d213be4ea1d8d946f7aa84d28f509ec7d096eae5" Dec 15 06:34:55 crc kubenswrapper[4747]: I1215 06:34:55.409401 4747 scope.go:117] "RemoveContainer" containerID="fa8df285aa8b0a75f6afc13c6293f48442a0c0110449a0d786b94438fd313c7f" Dec 15 
06:34:55 crc kubenswrapper[4747]: E1215 06:34:55.409796 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa8df285aa8b0a75f6afc13c6293f48442a0c0110449a0d786b94438fd313c7f\": container with ID starting with fa8df285aa8b0a75f6afc13c6293f48442a0c0110449a0d786b94438fd313c7f not found: ID does not exist" containerID="fa8df285aa8b0a75f6afc13c6293f48442a0c0110449a0d786b94438fd313c7f" Dec 15 06:34:55 crc kubenswrapper[4747]: I1215 06:34:55.409883 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa8df285aa8b0a75f6afc13c6293f48442a0c0110449a0d786b94438fd313c7f"} err="failed to get container status \"fa8df285aa8b0a75f6afc13c6293f48442a0c0110449a0d786b94438fd313c7f\": rpc error: code = NotFound desc = could not find container \"fa8df285aa8b0a75f6afc13c6293f48442a0c0110449a0d786b94438fd313c7f\": container with ID starting with fa8df285aa8b0a75f6afc13c6293f48442a0c0110449a0d786b94438fd313c7f not found: ID does not exist" Dec 15 06:34:55 crc kubenswrapper[4747]: I1215 06:34:55.409981 4747 scope.go:117] "RemoveContainer" containerID="a32cecaec591afd6245bbca30978b5a29240cb3e4599f99fc9d83246d8210603" Dec 15 06:34:55 crc kubenswrapper[4747]: E1215 06:34:55.410450 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a32cecaec591afd6245bbca30978b5a29240cb3e4599f99fc9d83246d8210603\": container with ID starting with a32cecaec591afd6245bbca30978b5a29240cb3e4599f99fc9d83246d8210603 not found: ID does not exist" containerID="a32cecaec591afd6245bbca30978b5a29240cb3e4599f99fc9d83246d8210603" Dec 15 06:34:55 crc kubenswrapper[4747]: I1215 06:34:55.410539 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a32cecaec591afd6245bbca30978b5a29240cb3e4599f99fc9d83246d8210603"} err="failed to get container status 
\"a32cecaec591afd6245bbca30978b5a29240cb3e4599f99fc9d83246d8210603\": rpc error: code = NotFound desc = could not find container \"a32cecaec591afd6245bbca30978b5a29240cb3e4599f99fc9d83246d8210603\": container with ID starting with a32cecaec591afd6245bbca30978b5a29240cb3e4599f99fc9d83246d8210603 not found: ID does not exist" Dec 15 06:34:55 crc kubenswrapper[4747]: I1215 06:34:55.410600 4747 scope.go:117] "RemoveContainer" containerID="17fef5193ca71ca381c2fc28d213be4ea1d8d946f7aa84d28f509ec7d096eae5" Dec 15 06:34:55 crc kubenswrapper[4747]: E1215 06:34:55.410891 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17fef5193ca71ca381c2fc28d213be4ea1d8d946f7aa84d28f509ec7d096eae5\": container with ID starting with 17fef5193ca71ca381c2fc28d213be4ea1d8d946f7aa84d28f509ec7d096eae5 not found: ID does not exist" containerID="17fef5193ca71ca381c2fc28d213be4ea1d8d946f7aa84d28f509ec7d096eae5" Dec 15 06:34:55 crc kubenswrapper[4747]: I1215 06:34:55.410977 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17fef5193ca71ca381c2fc28d213be4ea1d8d946f7aa84d28f509ec7d096eae5"} err="failed to get container status \"17fef5193ca71ca381c2fc28d213be4ea1d8d946f7aa84d28f509ec7d096eae5\": rpc error: code = NotFound desc = could not find container \"17fef5193ca71ca381c2fc28d213be4ea1d8d946f7aa84d28f509ec7d096eae5\": container with ID starting with 17fef5193ca71ca381c2fc28d213be4ea1d8d946f7aa84d28f509ec7d096eae5 not found: ID does not exist" Dec 15 06:34:56 crc kubenswrapper[4747]: I1215 06:34:56.640334 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eb79cb0-8250-4863-b16b-0bf4f055bbc1" path="/var/lib/kubelet/pods/8eb79cb0-8250-4863-b16b-0bf4f055bbc1/volumes" Dec 15 06:35:28 crc kubenswrapper[4747]: I1215 06:35:28.637185 4747 generic.go:334] "Generic (PLEG): container finished" podID="c1278277-ea16-4ad4-831a-0fd9a3057178" 
containerID="a1f8d21d74d7c7c83510229abac6651a8b1242f86d19aba11e6b5cbe6cc377f1" exitCode=0 Dec 15 06:35:28 crc kubenswrapper[4747]: I1215 06:35:28.642099 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-f4tdk/must-gather-m4b67" event={"ID":"c1278277-ea16-4ad4-831a-0fd9a3057178","Type":"ContainerDied","Data":"a1f8d21d74d7c7c83510229abac6651a8b1242f86d19aba11e6b5cbe6cc377f1"} Dec 15 06:35:28 crc kubenswrapper[4747]: I1215 06:35:28.643546 4747 scope.go:117] "RemoveContainer" containerID="a1f8d21d74d7c7c83510229abac6651a8b1242f86d19aba11e6b5cbe6cc377f1" Dec 15 06:35:28 crc kubenswrapper[4747]: I1215 06:35:28.842137 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-f4tdk_must-gather-m4b67_c1278277-ea16-4ad4-831a-0fd9a3057178/gather/0.log" Dec 15 06:35:36 crc kubenswrapper[4747]: I1215 06:35:36.034809 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-f4tdk/must-gather-m4b67"] Dec 15 06:35:36 crc kubenswrapper[4747]: I1215 06:35:36.035615 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-f4tdk/must-gather-m4b67" podUID="c1278277-ea16-4ad4-831a-0fd9a3057178" containerName="copy" containerID="cri-o://9c5c1885a697597925b2ac8d70704c1923241ffd09412569fcd4245aecc08fcb" gracePeriod=2 Dec 15 06:35:36 crc kubenswrapper[4747]: I1215 06:35:36.041875 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-f4tdk/must-gather-m4b67"] Dec 15 06:35:36 crc kubenswrapper[4747]: I1215 06:35:36.410901 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-f4tdk_must-gather-m4b67_c1278277-ea16-4ad4-831a-0fd9a3057178/copy/0.log" Dec 15 06:35:36 crc kubenswrapper[4747]: I1215 06:35:36.411903 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-f4tdk/must-gather-m4b67" Dec 15 06:35:36 crc kubenswrapper[4747]: I1215 06:35:36.460030 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c1278277-ea16-4ad4-831a-0fd9a3057178-must-gather-output\") pod \"c1278277-ea16-4ad4-831a-0fd9a3057178\" (UID: \"c1278277-ea16-4ad4-831a-0fd9a3057178\") " Dec 15 06:35:36 crc kubenswrapper[4747]: I1215 06:35:36.460430 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lclf6\" (UniqueName: \"kubernetes.io/projected/c1278277-ea16-4ad4-831a-0fd9a3057178-kube-api-access-lclf6\") pod \"c1278277-ea16-4ad4-831a-0fd9a3057178\" (UID: \"c1278277-ea16-4ad4-831a-0fd9a3057178\") " Dec 15 06:35:36 crc kubenswrapper[4747]: I1215 06:35:36.466704 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1278277-ea16-4ad4-831a-0fd9a3057178-kube-api-access-lclf6" (OuterVolumeSpecName: "kube-api-access-lclf6") pod "c1278277-ea16-4ad4-831a-0fd9a3057178" (UID: "c1278277-ea16-4ad4-831a-0fd9a3057178"). InnerVolumeSpecName "kube-api-access-lclf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:35:36 crc kubenswrapper[4747]: I1215 06:35:36.564216 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lclf6\" (UniqueName: \"kubernetes.io/projected/c1278277-ea16-4ad4-831a-0fd9a3057178-kube-api-access-lclf6\") on node \"crc\" DevicePath \"\"" Dec 15 06:35:36 crc kubenswrapper[4747]: I1215 06:35:36.581037 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1278277-ea16-4ad4-831a-0fd9a3057178-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c1278277-ea16-4ad4-831a-0fd9a3057178" (UID: "c1278277-ea16-4ad4-831a-0fd9a3057178"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:35:36 crc kubenswrapper[4747]: I1215 06:35:36.640815 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1278277-ea16-4ad4-831a-0fd9a3057178" path="/var/lib/kubelet/pods/c1278277-ea16-4ad4-831a-0fd9a3057178/volumes" Dec 15 06:35:36 crc kubenswrapper[4747]: I1215 06:35:36.666242 4747 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c1278277-ea16-4ad4-831a-0fd9a3057178-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 15 06:35:36 crc kubenswrapper[4747]: I1215 06:35:36.720386 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-f4tdk_must-gather-m4b67_c1278277-ea16-4ad4-831a-0fd9a3057178/copy/0.log" Dec 15 06:35:36 crc kubenswrapper[4747]: I1215 06:35:36.720802 4747 generic.go:334] "Generic (PLEG): container finished" podID="c1278277-ea16-4ad4-831a-0fd9a3057178" containerID="9c5c1885a697597925b2ac8d70704c1923241ffd09412569fcd4245aecc08fcb" exitCode=143 Dec 15 06:35:36 crc kubenswrapper[4747]: I1215 06:35:36.720853 4747 scope.go:117] "RemoveContainer" containerID="9c5c1885a697597925b2ac8d70704c1923241ffd09412569fcd4245aecc08fcb" Dec 15 06:35:36 crc kubenswrapper[4747]: I1215 06:35:36.720853 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-f4tdk/must-gather-m4b67" Dec 15 06:35:36 crc kubenswrapper[4747]: I1215 06:35:36.737337 4747 scope.go:117] "RemoveContainer" containerID="a1f8d21d74d7c7c83510229abac6651a8b1242f86d19aba11e6b5cbe6cc377f1" Dec 15 06:35:36 crc kubenswrapper[4747]: I1215 06:35:36.767505 4747 scope.go:117] "RemoveContainer" containerID="9c5c1885a697597925b2ac8d70704c1923241ffd09412569fcd4245aecc08fcb" Dec 15 06:35:36 crc kubenswrapper[4747]: E1215 06:35:36.768043 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c5c1885a697597925b2ac8d70704c1923241ffd09412569fcd4245aecc08fcb\": container with ID starting with 9c5c1885a697597925b2ac8d70704c1923241ffd09412569fcd4245aecc08fcb not found: ID does not exist" containerID="9c5c1885a697597925b2ac8d70704c1923241ffd09412569fcd4245aecc08fcb" Dec 15 06:35:36 crc kubenswrapper[4747]: I1215 06:35:36.768089 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c5c1885a697597925b2ac8d70704c1923241ffd09412569fcd4245aecc08fcb"} err="failed to get container status \"9c5c1885a697597925b2ac8d70704c1923241ffd09412569fcd4245aecc08fcb\": rpc error: code = NotFound desc = could not find container \"9c5c1885a697597925b2ac8d70704c1923241ffd09412569fcd4245aecc08fcb\": container with ID starting with 9c5c1885a697597925b2ac8d70704c1923241ffd09412569fcd4245aecc08fcb not found: ID does not exist" Dec 15 06:35:36 crc kubenswrapper[4747]: I1215 06:35:36.768121 4747 scope.go:117] "RemoveContainer" containerID="a1f8d21d74d7c7c83510229abac6651a8b1242f86d19aba11e6b5cbe6cc377f1" Dec 15 06:35:36 crc kubenswrapper[4747]: E1215 06:35:36.768581 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1f8d21d74d7c7c83510229abac6651a8b1242f86d19aba11e6b5cbe6cc377f1\": container with ID starting with 
a1f8d21d74d7c7c83510229abac6651a8b1242f86d19aba11e6b5cbe6cc377f1 not found: ID does not exist" containerID="a1f8d21d74d7c7c83510229abac6651a8b1242f86d19aba11e6b5cbe6cc377f1" Dec 15 06:35:36 crc kubenswrapper[4747]: I1215 06:35:36.768607 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1f8d21d74d7c7c83510229abac6651a8b1242f86d19aba11e6b5cbe6cc377f1"} err="failed to get container status \"a1f8d21d74d7c7c83510229abac6651a8b1242f86d19aba11e6b5cbe6cc377f1\": rpc error: code = NotFound desc = could not find container \"a1f8d21d74d7c7c83510229abac6651a8b1242f86d19aba11e6b5cbe6cc377f1\": container with ID starting with a1f8d21d74d7c7c83510229abac6651a8b1242f86d19aba11e6b5cbe6cc377f1 not found: ID does not exist" Dec 15 06:36:58 crc kubenswrapper[4747]: I1215 06:36:58.865588 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:36:58 crc kubenswrapper[4747]: I1215 06:36:58.866218 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:37:28 crc kubenswrapper[4747]: I1215 06:37:28.864903 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:37:28 crc kubenswrapper[4747]: I1215 06:37:28.865434 4747 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.752256 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mxqhb/must-gather-4f456"] Dec 15 06:37:39 crc kubenswrapper[4747]: E1215 06:37:39.753296 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8bbb43-a627-46d2-9062-e5ff50ae7d8f" containerName="extract-utilities" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.753316 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa8bbb43-a627-46d2-9062-e5ff50ae7d8f" containerName="extract-utilities" Dec 15 06:37:39 crc kubenswrapper[4747]: E1215 06:37:39.753331 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb79cb0-8250-4863-b16b-0bf4f055bbc1" containerName="registry-server" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.753337 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb79cb0-8250-4863-b16b-0bf4f055bbc1" containerName="registry-server" Dec 15 06:37:39 crc kubenswrapper[4747]: E1215 06:37:39.753348 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8bbb43-a627-46d2-9062-e5ff50ae7d8f" containerName="extract-content" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.753355 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa8bbb43-a627-46d2-9062-e5ff50ae7d8f" containerName="extract-content" Dec 15 06:37:39 crc kubenswrapper[4747]: E1215 06:37:39.753370 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8bbb43-a627-46d2-9062-e5ff50ae7d8f" containerName="registry-server" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.753374 4747 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fa8bbb43-a627-46d2-9062-e5ff50ae7d8f" containerName="registry-server" Dec 15 06:37:39 crc kubenswrapper[4747]: E1215 06:37:39.753383 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1278277-ea16-4ad4-831a-0fd9a3057178" containerName="gather" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.753387 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1278277-ea16-4ad4-831a-0fd9a3057178" containerName="gather" Dec 15 06:37:39 crc kubenswrapper[4747]: E1215 06:37:39.753414 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb79cb0-8250-4863-b16b-0bf4f055bbc1" containerName="extract-content" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.753420 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb79cb0-8250-4863-b16b-0bf4f055bbc1" containerName="extract-content" Dec 15 06:37:39 crc kubenswrapper[4747]: E1215 06:37:39.753440 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eb79cb0-8250-4863-b16b-0bf4f055bbc1" containerName="extract-utilities" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.753446 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eb79cb0-8250-4863-b16b-0bf4f055bbc1" containerName="extract-utilities" Dec 15 06:37:39 crc kubenswrapper[4747]: E1215 06:37:39.753453 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1278277-ea16-4ad4-831a-0fd9a3057178" containerName="copy" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.753459 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1278277-ea16-4ad4-831a-0fd9a3057178" containerName="copy" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.753722 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eb79cb0-8250-4863-b16b-0bf4f055bbc1" containerName="registry-server" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.753743 4747 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fa8bbb43-a627-46d2-9062-e5ff50ae7d8f" containerName="registry-server" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.753755 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1278277-ea16-4ad4-831a-0fd9a3057178" containerName="gather" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.753773 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1278277-ea16-4ad4-831a-0fd9a3057178" containerName="copy" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.754878 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mxqhb/must-gather-4f456" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.757179 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mxqhb"/"kube-root-ca.crt" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.757387 4747 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mxqhb"/"default-dockercfg-xkhdn" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.758072 4747 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mxqhb"/"openshift-service-ca.crt" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.759840 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mxqhb/must-gather-4f456"] Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.819850 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa-must-gather-output\") pod \"must-gather-4f456\" (UID: \"d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa\") " pod="openshift-must-gather-mxqhb/must-gather-4f456" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.820151 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8wqk\" (UniqueName: 
\"kubernetes.io/projected/d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa-kube-api-access-k8wqk\") pod \"must-gather-4f456\" (UID: \"d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa\") " pod="openshift-must-gather-mxqhb/must-gather-4f456" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.921858 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8wqk\" (UniqueName: \"kubernetes.io/projected/d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa-kube-api-access-k8wqk\") pod \"must-gather-4f456\" (UID: \"d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa\") " pod="openshift-must-gather-mxqhb/must-gather-4f456" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.922042 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa-must-gather-output\") pod \"must-gather-4f456\" (UID: \"d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa\") " pod="openshift-must-gather-mxqhb/must-gather-4f456" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.922482 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa-must-gather-output\") pod \"must-gather-4f456\" (UID: \"d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa\") " pod="openshift-must-gather-mxqhb/must-gather-4f456" Dec 15 06:37:39 crc kubenswrapper[4747]: I1215 06:37:39.938243 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8wqk\" (UniqueName: \"kubernetes.io/projected/d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa-kube-api-access-k8wqk\") pod \"must-gather-4f456\" (UID: \"d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa\") " pod="openshift-must-gather-mxqhb/must-gather-4f456" Dec 15 06:37:40 crc kubenswrapper[4747]: I1215 06:37:40.072902 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mxqhb/must-gather-4f456" Dec 15 06:37:40 crc kubenswrapper[4747]: I1215 06:37:40.492606 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mxqhb/must-gather-4f456"] Dec 15 06:37:40 crc kubenswrapper[4747]: I1215 06:37:40.876304 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mxqhb/must-gather-4f456" event={"ID":"d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa","Type":"ContainerStarted","Data":"c13f33e3f0cc3b6f5f11fb439e639ee0570f0ed2b24536cc05d52cf68e89a5a3"} Dec 15 06:37:40 crc kubenswrapper[4747]: I1215 06:37:40.876540 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mxqhb/must-gather-4f456" event={"ID":"d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa","Type":"ContainerStarted","Data":"8d26aafbb19fb67c04310553a135163cf81a3300c7cfee97b5f531ccf9c60879"} Dec 15 06:37:41 crc kubenswrapper[4747]: I1215 06:37:41.886555 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mxqhb/must-gather-4f456" event={"ID":"d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa","Type":"ContainerStarted","Data":"7450b811132c4062a72595939955387c34a6207ea0bc870ed9b7bdb03414c9a6"} Dec 15 06:37:41 crc kubenswrapper[4747]: I1215 06:37:41.905992 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mxqhb/must-gather-4f456" podStartSLOduration=2.905515792 podStartE2EDuration="2.905515792s" podCreationTimestamp="2025-12-15 06:37:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 06:37:41.898203133 +0000 UTC m=+3625.594715050" watchObservedRunningTime="2025-12-15 06:37:41.905515792 +0000 UTC m=+3625.602027708" Dec 15 06:37:43 crc kubenswrapper[4747]: I1215 06:37:43.803752 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mxqhb/crc-debug-gjrhn"] Dec 15 06:37:43 crc kubenswrapper[4747]: 
I1215 06:37:43.806140 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mxqhb/crc-debug-gjrhn" Dec 15 06:37:43 crc kubenswrapper[4747]: I1215 06:37:43.816197 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0178ddc0-c962-4f4f-ab72-d8f0845a3fb8-host\") pod \"crc-debug-gjrhn\" (UID: \"0178ddc0-c962-4f4f-ab72-d8f0845a3fb8\") " pod="openshift-must-gather-mxqhb/crc-debug-gjrhn" Dec 15 06:37:43 crc kubenswrapper[4747]: I1215 06:37:43.816250 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p48vz\" (UniqueName: \"kubernetes.io/projected/0178ddc0-c962-4f4f-ab72-d8f0845a3fb8-kube-api-access-p48vz\") pod \"crc-debug-gjrhn\" (UID: \"0178ddc0-c962-4f4f-ab72-d8f0845a3fb8\") " pod="openshift-must-gather-mxqhb/crc-debug-gjrhn" Dec 15 06:37:43 crc kubenswrapper[4747]: I1215 06:37:43.918341 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0178ddc0-c962-4f4f-ab72-d8f0845a3fb8-host\") pod \"crc-debug-gjrhn\" (UID: \"0178ddc0-c962-4f4f-ab72-d8f0845a3fb8\") " pod="openshift-must-gather-mxqhb/crc-debug-gjrhn" Dec 15 06:37:43 crc kubenswrapper[4747]: I1215 06:37:43.918420 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p48vz\" (UniqueName: \"kubernetes.io/projected/0178ddc0-c962-4f4f-ab72-d8f0845a3fb8-kube-api-access-p48vz\") pod \"crc-debug-gjrhn\" (UID: \"0178ddc0-c962-4f4f-ab72-d8f0845a3fb8\") " pod="openshift-must-gather-mxqhb/crc-debug-gjrhn" Dec 15 06:37:43 crc kubenswrapper[4747]: I1215 06:37:43.918474 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0178ddc0-c962-4f4f-ab72-d8f0845a3fb8-host\") pod \"crc-debug-gjrhn\" (UID: \"0178ddc0-c962-4f4f-ab72-d8f0845a3fb8\") 
" pod="openshift-must-gather-mxqhb/crc-debug-gjrhn" Dec 15 06:37:43 crc kubenswrapper[4747]: I1215 06:37:43.937479 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p48vz\" (UniqueName: \"kubernetes.io/projected/0178ddc0-c962-4f4f-ab72-d8f0845a3fb8-kube-api-access-p48vz\") pod \"crc-debug-gjrhn\" (UID: \"0178ddc0-c962-4f4f-ab72-d8f0845a3fb8\") " pod="openshift-must-gather-mxqhb/crc-debug-gjrhn" Dec 15 06:37:44 crc kubenswrapper[4747]: I1215 06:37:44.128885 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mxqhb/crc-debug-gjrhn" Dec 15 06:37:44 crc kubenswrapper[4747]: W1215 06:37:44.156329 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0178ddc0_c962_4f4f_ab72_d8f0845a3fb8.slice/crio-91661a7b3912100bb38e780c00a7b22b64576e8ab149b5c66681b3dfd6a36bfa WatchSource:0}: Error finding container 91661a7b3912100bb38e780c00a7b22b64576e8ab149b5c66681b3dfd6a36bfa: Status 404 returned error can't find the container with id 91661a7b3912100bb38e780c00a7b22b64576e8ab149b5c66681b3dfd6a36bfa Dec 15 06:37:44 crc kubenswrapper[4747]: I1215 06:37:44.912311 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mxqhb/crc-debug-gjrhn" event={"ID":"0178ddc0-c962-4f4f-ab72-d8f0845a3fb8","Type":"ContainerStarted","Data":"b15127997d621f319ac17b9eaae2040c2904bf99dc24b74547325b1566cb3ab6"} Dec 15 06:37:44 crc kubenswrapper[4747]: I1215 06:37:44.912889 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mxqhb/crc-debug-gjrhn" event={"ID":"0178ddc0-c962-4f4f-ab72-d8f0845a3fb8","Type":"ContainerStarted","Data":"91661a7b3912100bb38e780c00a7b22b64576e8ab149b5c66681b3dfd6a36bfa"} Dec 15 06:37:44 crc kubenswrapper[4747]: I1215 06:37:44.929464 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mxqhb/crc-debug-gjrhn" 
podStartSLOduration=1.929446561 podStartE2EDuration="1.929446561s" podCreationTimestamp="2025-12-15 06:37:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-15 06:37:44.924472078 +0000 UTC m=+3628.620983986" watchObservedRunningTime="2025-12-15 06:37:44.929446561 +0000 UTC m=+3628.625958478" Dec 15 06:37:54 crc kubenswrapper[4747]: I1215 06:37:54.997238 4747 generic.go:334] "Generic (PLEG): container finished" podID="0178ddc0-c962-4f4f-ab72-d8f0845a3fb8" containerID="b15127997d621f319ac17b9eaae2040c2904bf99dc24b74547325b1566cb3ab6" exitCode=0 Dec 15 06:37:54 crc kubenswrapper[4747]: I1215 06:37:54.997320 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mxqhb/crc-debug-gjrhn" event={"ID":"0178ddc0-c962-4f4f-ab72-d8f0845a3fb8","Type":"ContainerDied","Data":"b15127997d621f319ac17b9eaae2040c2904bf99dc24b74547325b1566cb3ab6"} Dec 15 06:37:56 crc kubenswrapper[4747]: I1215 06:37:56.089882 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mxqhb/crc-debug-gjrhn" Dec 15 06:37:56 crc kubenswrapper[4747]: I1215 06:37:56.118041 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mxqhb/crc-debug-gjrhn"] Dec 15 06:37:56 crc kubenswrapper[4747]: I1215 06:37:56.125588 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mxqhb/crc-debug-gjrhn"] Dec 15 06:37:56 crc kubenswrapper[4747]: I1215 06:37:56.155549 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p48vz\" (UniqueName: \"kubernetes.io/projected/0178ddc0-c962-4f4f-ab72-d8f0845a3fb8-kube-api-access-p48vz\") pod \"0178ddc0-c962-4f4f-ab72-d8f0845a3fb8\" (UID: \"0178ddc0-c962-4f4f-ab72-d8f0845a3fb8\") " Dec 15 06:37:56 crc kubenswrapper[4747]: I1215 06:37:56.155642 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0178ddc0-c962-4f4f-ab72-d8f0845a3fb8-host\") pod \"0178ddc0-c962-4f4f-ab72-d8f0845a3fb8\" (UID: \"0178ddc0-c962-4f4f-ab72-d8f0845a3fb8\") " Dec 15 06:37:56 crc kubenswrapper[4747]: I1215 06:37:56.155766 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0178ddc0-c962-4f4f-ab72-d8f0845a3fb8-host" (OuterVolumeSpecName: "host") pod "0178ddc0-c962-4f4f-ab72-d8f0845a3fb8" (UID: "0178ddc0-c962-4f4f-ab72-d8f0845a3fb8"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 06:37:56 crc kubenswrapper[4747]: I1215 06:37:56.156765 4747 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0178ddc0-c962-4f4f-ab72-d8f0845a3fb8-host\") on node \"crc\" DevicePath \"\"" Dec 15 06:37:56 crc kubenswrapper[4747]: I1215 06:37:56.163915 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0178ddc0-c962-4f4f-ab72-d8f0845a3fb8-kube-api-access-p48vz" (OuterVolumeSpecName: "kube-api-access-p48vz") pod "0178ddc0-c962-4f4f-ab72-d8f0845a3fb8" (UID: "0178ddc0-c962-4f4f-ab72-d8f0845a3fb8"). InnerVolumeSpecName "kube-api-access-p48vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:37:56 crc kubenswrapper[4747]: I1215 06:37:56.258595 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p48vz\" (UniqueName: \"kubernetes.io/projected/0178ddc0-c962-4f4f-ab72-d8f0845a3fb8-kube-api-access-p48vz\") on node \"crc\" DevicePath \"\"" Dec 15 06:37:56 crc kubenswrapper[4747]: I1215 06:37:56.639508 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0178ddc0-c962-4f4f-ab72-d8f0845a3fb8" path="/var/lib/kubelet/pods/0178ddc0-c962-4f4f-ab72-d8f0845a3fb8/volumes" Dec 15 06:37:57 crc kubenswrapper[4747]: I1215 06:37:57.015895 4747 scope.go:117] "RemoveContainer" containerID="b15127997d621f319ac17b9eaae2040c2904bf99dc24b74547325b1566cb3ab6" Dec 15 06:37:57 crc kubenswrapper[4747]: I1215 06:37:57.015998 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mxqhb/crc-debug-gjrhn" Dec 15 06:37:57 crc kubenswrapper[4747]: I1215 06:37:57.326300 4747 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mxqhb/crc-debug-xtwdm"] Dec 15 06:37:57 crc kubenswrapper[4747]: E1215 06:37:57.327855 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0178ddc0-c962-4f4f-ab72-d8f0845a3fb8" containerName="container-00" Dec 15 06:37:57 crc kubenswrapper[4747]: I1215 06:37:57.327961 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="0178ddc0-c962-4f4f-ab72-d8f0845a3fb8" containerName="container-00" Dec 15 06:37:57 crc kubenswrapper[4747]: I1215 06:37:57.328380 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="0178ddc0-c962-4f4f-ab72-d8f0845a3fb8" containerName="container-00" Dec 15 06:37:57 crc kubenswrapper[4747]: I1215 06:37:57.329379 4747 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mxqhb/crc-debug-xtwdm" Dec 15 06:37:57 crc kubenswrapper[4747]: I1215 06:37:57.486434 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hz7f\" (UniqueName: \"kubernetes.io/projected/0d446070-5a65-48d3-9f93-a24d589dc62e-kube-api-access-2hz7f\") pod \"crc-debug-xtwdm\" (UID: \"0d446070-5a65-48d3-9f93-a24d589dc62e\") " pod="openshift-must-gather-mxqhb/crc-debug-xtwdm" Dec 15 06:37:57 crc kubenswrapper[4747]: I1215 06:37:57.486788 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d446070-5a65-48d3-9f93-a24d589dc62e-host\") pod \"crc-debug-xtwdm\" (UID: \"0d446070-5a65-48d3-9f93-a24d589dc62e\") " pod="openshift-must-gather-mxqhb/crc-debug-xtwdm" Dec 15 06:37:57 crc kubenswrapper[4747]: I1215 06:37:57.589327 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hz7f\" (UniqueName: 
\"kubernetes.io/projected/0d446070-5a65-48d3-9f93-a24d589dc62e-kube-api-access-2hz7f\") pod \"crc-debug-xtwdm\" (UID: \"0d446070-5a65-48d3-9f93-a24d589dc62e\") " pod="openshift-must-gather-mxqhb/crc-debug-xtwdm" Dec 15 06:37:57 crc kubenswrapper[4747]: I1215 06:37:57.589423 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d446070-5a65-48d3-9f93-a24d589dc62e-host\") pod \"crc-debug-xtwdm\" (UID: \"0d446070-5a65-48d3-9f93-a24d589dc62e\") " pod="openshift-must-gather-mxqhb/crc-debug-xtwdm" Dec 15 06:37:57 crc kubenswrapper[4747]: I1215 06:37:57.589695 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d446070-5a65-48d3-9f93-a24d589dc62e-host\") pod \"crc-debug-xtwdm\" (UID: \"0d446070-5a65-48d3-9f93-a24d589dc62e\") " pod="openshift-must-gather-mxqhb/crc-debug-xtwdm" Dec 15 06:37:57 crc kubenswrapper[4747]: I1215 06:37:57.612279 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hz7f\" (UniqueName: \"kubernetes.io/projected/0d446070-5a65-48d3-9f93-a24d589dc62e-kube-api-access-2hz7f\") pod \"crc-debug-xtwdm\" (UID: \"0d446070-5a65-48d3-9f93-a24d589dc62e\") " pod="openshift-must-gather-mxqhb/crc-debug-xtwdm" Dec 15 06:37:57 crc kubenswrapper[4747]: I1215 06:37:57.645350 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mxqhb/crc-debug-xtwdm" Dec 15 06:37:57 crc kubenswrapper[4747]: W1215 06:37:57.669962 4747 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d446070_5a65_48d3_9f93_a24d589dc62e.slice/crio-0741d9bf8f8824649bd152b2ccf85b43804fb85f501b0c9cd0c8894c4a9e1efd WatchSource:0}: Error finding container 0741d9bf8f8824649bd152b2ccf85b43804fb85f501b0c9cd0c8894c4a9e1efd: Status 404 returned error can't find the container with id 0741d9bf8f8824649bd152b2ccf85b43804fb85f501b0c9cd0c8894c4a9e1efd Dec 15 06:37:58 crc kubenswrapper[4747]: I1215 06:37:58.026729 4747 generic.go:334] "Generic (PLEG): container finished" podID="0d446070-5a65-48d3-9f93-a24d589dc62e" containerID="9dd89d80d30e94a80a9d398c7109335ba563f9c701729039a2e79c084d304fe7" exitCode=1 Dec 15 06:37:58 crc kubenswrapper[4747]: I1215 06:37:58.026801 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mxqhb/crc-debug-xtwdm" event={"ID":"0d446070-5a65-48d3-9f93-a24d589dc62e","Type":"ContainerDied","Data":"9dd89d80d30e94a80a9d398c7109335ba563f9c701729039a2e79c084d304fe7"} Dec 15 06:37:58 crc kubenswrapper[4747]: I1215 06:37:58.027000 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mxqhb/crc-debug-xtwdm" event={"ID":"0d446070-5a65-48d3-9f93-a24d589dc62e","Type":"ContainerStarted","Data":"0741d9bf8f8824649bd152b2ccf85b43804fb85f501b0c9cd0c8894c4a9e1efd"} Dec 15 06:37:58 crc kubenswrapper[4747]: I1215 06:37:58.056373 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mxqhb/crc-debug-xtwdm"] Dec 15 06:37:58 crc kubenswrapper[4747]: I1215 06:37:58.070090 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mxqhb/crc-debug-xtwdm"] Dec 15 06:37:58 crc kubenswrapper[4747]: I1215 06:37:58.865562 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:37:58 crc kubenswrapper[4747]: I1215 06:37:58.865906 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:37:58 crc kubenswrapper[4747]: I1215 06:37:58.865979 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 06:37:58 crc kubenswrapper[4747]: I1215 06:37:58.866729 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e21c8b075436a10f5cd74cbf56c983328815d2f43d9184bb57bbdc7f8ffd76c1"} pod="openshift-machine-config-operator/machine-config-daemon-nldtn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 15 06:37:58 crc kubenswrapper[4747]: I1215 06:37:58.866788 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" containerID="cri-o://e21c8b075436a10f5cd74cbf56c983328815d2f43d9184bb57bbdc7f8ffd76c1" gracePeriod=600 Dec 15 06:37:59 crc kubenswrapper[4747]: I1215 06:37:59.044512 4747 generic.go:334] "Generic (PLEG): container finished" podID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerID="e21c8b075436a10f5cd74cbf56c983328815d2f43d9184bb57bbdc7f8ffd76c1" exitCode=0 Dec 15 06:37:59 crc kubenswrapper[4747]: I1215 06:37:59.044601 4747 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerDied","Data":"e21c8b075436a10f5cd74cbf56c983328815d2f43d9184bb57bbdc7f8ffd76c1"} Dec 15 06:37:59 crc kubenswrapper[4747]: I1215 06:37:59.044662 4747 scope.go:117] "RemoveContainer" containerID="44cd12cf9c71b73bd3a2782f4e6fd9beb9278b65a8ace12d37e33f7b4a372d97" Dec 15 06:37:59 crc kubenswrapper[4747]: I1215 06:37:59.125702 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mxqhb/crc-debug-xtwdm" Dec 15 06:37:59 crc kubenswrapper[4747]: I1215 06:37:59.327051 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hz7f\" (UniqueName: \"kubernetes.io/projected/0d446070-5a65-48d3-9f93-a24d589dc62e-kube-api-access-2hz7f\") pod \"0d446070-5a65-48d3-9f93-a24d589dc62e\" (UID: \"0d446070-5a65-48d3-9f93-a24d589dc62e\") " Dec 15 06:37:59 crc kubenswrapper[4747]: I1215 06:37:59.327351 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d446070-5a65-48d3-9f93-a24d589dc62e-host\") pod \"0d446070-5a65-48d3-9f93-a24d589dc62e\" (UID: \"0d446070-5a65-48d3-9f93-a24d589dc62e\") " Dec 15 06:37:59 crc kubenswrapper[4747]: I1215 06:37:59.327950 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d446070-5a65-48d3-9f93-a24d589dc62e-host" (OuterVolumeSpecName: "host") pod "0d446070-5a65-48d3-9f93-a24d589dc62e" (UID: "0d446070-5a65-48d3-9f93-a24d589dc62e"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 15 06:37:59 crc kubenswrapper[4747]: I1215 06:37:59.342802 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d446070-5a65-48d3-9f93-a24d589dc62e-kube-api-access-2hz7f" (OuterVolumeSpecName: "kube-api-access-2hz7f") pod "0d446070-5a65-48d3-9f93-a24d589dc62e" (UID: "0d446070-5a65-48d3-9f93-a24d589dc62e"). InnerVolumeSpecName "kube-api-access-2hz7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:37:59 crc kubenswrapper[4747]: I1215 06:37:59.430216 4747 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d446070-5a65-48d3-9f93-a24d589dc62e-host\") on node \"crc\" DevicePath \"\"" Dec 15 06:37:59 crc kubenswrapper[4747]: I1215 06:37:59.430443 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hz7f\" (UniqueName: \"kubernetes.io/projected/0d446070-5a65-48d3-9f93-a24d589dc62e-kube-api-access-2hz7f\") on node \"crc\" DevicePath \"\"" Dec 15 06:38:00 crc kubenswrapper[4747]: I1215 06:38:00.059668 4747 scope.go:117] "RemoveContainer" containerID="9dd89d80d30e94a80a9d398c7109335ba563f9c701729039a2e79c084d304fe7" Dec 15 06:38:00 crc kubenswrapper[4747]: I1215 06:38:00.059735 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mxqhb/crc-debug-xtwdm" Dec 15 06:38:00 crc kubenswrapper[4747]: I1215 06:38:00.064947 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerStarted","Data":"04957b75e1647a0c1079a2918e34078c34fa86cbc8c0b1900884016cbe7c9c82"} Dec 15 06:38:00 crc kubenswrapper[4747]: I1215 06:38:00.641832 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d446070-5a65-48d3-9f93-a24d589dc62e" path="/var/lib/kubelet/pods/0d446070-5a65-48d3-9f93-a24d589dc62e/volumes" Dec 15 06:38:34 crc kubenswrapper[4747]: I1215 06:38:34.180556 4747 scope.go:117] "RemoveContainer" containerID="333ad0238c44dd77c37c551997aadbefe0461c2fe77a2fbd890ff09e2490ec7a" Dec 15 06:38:36 crc kubenswrapper[4747]: I1215 06:38:36.279722 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-79fcd98c9d-ccgjm_ecaeb0c4-ae67-4901-bc77-863b3a8c5c03/barbican-api/0.log" Dec 15 06:38:36 crc kubenswrapper[4747]: I1215 06:38:36.428880 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-79fcd98c9d-ccgjm_ecaeb0c4-ae67-4901-bc77-863b3a8c5c03/barbican-api-log/0.log" Dec 15 06:38:36 crc kubenswrapper[4747]: I1215 06:38:36.457074 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-fb9798db-mhqvv_5adecd4c-fd5a-4186-866f-2de0e4f9a859/barbican-keystone-listener/0.log" Dec 15 06:38:36 crc kubenswrapper[4747]: I1215 06:38:36.543411 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-fb9798db-mhqvv_5adecd4c-fd5a-4186-866f-2de0e4f9a859/barbican-keystone-listener-log/0.log" Dec 15 06:38:36 crc kubenswrapper[4747]: I1215 06:38:36.625738 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-9b996c647-vsbr7_fc262319-2445-42a7-9fb4-46f640216e00/barbican-worker/0.log" Dec 15 06:38:36 crc kubenswrapper[4747]: I1215 06:38:36.687218 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-9b996c647-vsbr7_fc262319-2445-42a7-9fb4-46f640216e00/barbican-worker-log/0.log" Dec 15 06:38:36 crc kubenswrapper[4747]: I1215 06:38:36.811602 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xl6qh_2af42599-0cda-45de-b1fe-9bed5ad6f035/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:38:36 crc kubenswrapper[4747]: I1215 06:38:36.910258 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_34eac981-4ed2-4654-b4b0-f52ac5c7aeda/ceilometer-central-agent/0.log" Dec 15 06:38:37 crc kubenswrapper[4747]: I1215 06:38:37.007469 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_34eac981-4ed2-4654-b4b0-f52ac5c7aeda/proxy-httpd/0.log" Dec 15 06:38:37 crc kubenswrapper[4747]: I1215 06:38:37.042702 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_34eac981-4ed2-4654-b4b0-f52ac5c7aeda/ceilometer-notification-agent/0.log" Dec 15 06:38:37 crc kubenswrapper[4747]: I1215 06:38:37.093461 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_34eac981-4ed2-4654-b4b0-f52ac5c7aeda/sg-core/0.log" Dec 15 06:38:37 crc kubenswrapper[4747]: I1215 06:38:37.218525 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f66857a8-55e6-4e4f-ba1c-23bc5afec36b/cinder-api-log/0.log" Dec 15 06:38:37 crc kubenswrapper[4747]: I1215 06:38:37.342949 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_de848267-fecb-4856-98c8-e81c3cfbb156/cinder-scheduler/0.log" Dec 15 06:38:37 crc kubenswrapper[4747]: I1215 06:38:37.474041 4747 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_de848267-fecb-4856-98c8-e81c3cfbb156/probe/0.log" Dec 15 06:38:37 crc kubenswrapper[4747]: I1215 06:38:37.481284 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f66857a8-55e6-4e4f-ba1c-23bc5afec36b/cinder-api/0.log" Dec 15 06:38:37 crc kubenswrapper[4747]: I1215 06:38:37.570372 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-dj5jz_163accf5-f1cd-48a4-93e3-4c7e6172470e/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:38:37 crc kubenswrapper[4747]: I1215 06:38:37.705531 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-594cx_23f33913-7e72-4eee-bd81-3561906af7fb/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:38:37 crc kubenswrapper[4747]: I1215 06:38:37.782696 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-775bb8f95f-twm2m_ad0490f9-1430-4511-b8cc-139a6c656b48/init/0.log" Dec 15 06:38:37 crc kubenswrapper[4747]: I1215 06:38:37.912427 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-775bb8f95f-twm2m_ad0490f9-1430-4511-b8cc-139a6c656b48/init/0.log" Dec 15 06:38:37 crc kubenswrapper[4747]: I1215 06:38:37.977575 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-775bb8f95f-twm2m_ad0490f9-1430-4511-b8cc-139a6c656b48/dnsmasq-dns/0.log" Dec 15 06:38:37 crc kubenswrapper[4747]: I1215 06:38:37.998945 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-tx8kh_e1073c0b-63fe-4562-bc3c-953bd3697022/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:38:38 crc kubenswrapper[4747]: I1215 06:38:38.335338 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_32596d05-cc4c-41f3-87b0-a69ff49aba9d/glance-httpd/0.log" Dec 15 06:38:38 crc kubenswrapper[4747]: I1215 06:38:38.354026 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_32596d05-cc4c-41f3-87b0-a69ff49aba9d/glance-log/0.log" Dec 15 06:38:38 crc kubenswrapper[4747]: I1215 06:38:38.488332 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_89218c2b-2e98-43cc-a4b4-3e741773bfb8/glance-httpd/0.log" Dec 15 06:38:38 crc kubenswrapper[4747]: I1215 06:38:38.532500 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_89218c2b-2e98-43cc-a4b4-3e741773bfb8/glance-log/0.log" Dec 15 06:38:38 crc kubenswrapper[4747]: I1215 06:38:38.620842 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-tftvb_4d6d2d2a-b9f8-4b3f-b71f-f2cb46f77826/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:38:38 crc kubenswrapper[4747]: I1215 06:38:38.798716 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-kp4bl_589b27c2-c1d7-423e-b324-10ebc183f51d/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:38:39 crc kubenswrapper[4747]: I1215 06:38:39.018235 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29429641-58jbc_e52cba4b-1373-4ebc-8e01-f0cb86d099ea/keystone-cron/0.log" Dec 15 06:38:39 crc kubenswrapper[4747]: I1215 06:38:39.155521 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_39eb3298-a864-45a5-b1a1-df263390967d/kube-state-metrics/0.log" Dec 15 06:38:39 crc kubenswrapper[4747]: I1215 06:38:39.192633 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-6b5fccc9fc-25v6s_3f0cf723-d247-4d37-95f2-2ba1318f3e27/keystone-api/0.log" Dec 15 06:38:39 crc kubenswrapper[4747]: I1215 06:38:39.305029 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-rdwlk_56848ff3-1ce9-42b3-be44-5b8d4280c9a1/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:38:39 crc kubenswrapper[4747]: I1215 06:38:39.528159 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b9f9565dc-vlcmk_8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7/neutron-httpd/0.log" Dec 15 06:38:39 crc kubenswrapper[4747]: I1215 06:38:39.608609 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_8828a0c4-9d91-45ba-a6f7-3bd720a9596b/memcached/0.log" Dec 15 06:38:39 crc kubenswrapper[4747]: I1215 06:38:39.623042 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b9f9565dc-vlcmk_8e5a51e1-29fc-4fe7-b6d4-5a3227a93ec7/neutron-api/0.log" Dec 15 06:38:39 crc kubenswrapper[4747]: I1215 06:38:39.699583 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-nm52b_cd2cacaf-52f3-4fc9-8cba-11e9536bdfa4/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:38:40 crc kubenswrapper[4747]: I1215 06:38:40.101421 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7d859c86-54f1-459b-82a5-1ed6739f42f9/nova-api-log/0.log" Dec 15 06:38:40 crc kubenswrapper[4747]: I1215 06:38:40.182354 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ead9df5f-294c-464e-b416-743ad9245464/nova-cell0-conductor-conductor/0.log" Dec 15 06:38:40 crc kubenswrapper[4747]: I1215 06:38:40.241347 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8472a77c-3c9b-4fa1-9572-cc21f9c2b814/nova-cell1-conductor-conductor/0.log" 
Dec 15 06:38:40 crc kubenswrapper[4747]: I1215 06:38:40.456562 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-v7cvg_6a04d0c3-49fa-44ad-ab27-08ba583d1142/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:38:40 crc kubenswrapper[4747]: I1215 06:38:40.482187 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_51032daa-0c9c-4794-9422-2ea37212e21e/nova-cell1-novncproxy-novncproxy/0.log" Dec 15 06:38:40 crc kubenswrapper[4747]: I1215 06:38:40.509480 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7d859c86-54f1-459b-82a5-1ed6739f42f9/nova-api-api/0.log" Dec 15 06:38:40 crc kubenswrapper[4747]: I1215 06:38:40.619036 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e45a17a8-29f1-40e2-96ae-f2db0b32407e/nova-metadata-log/0.log" Dec 15 06:38:40 crc kubenswrapper[4747]: I1215 06:38:40.839696 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_22da0dca-a59a-40f7-8dd2-95305eea5ee0/mysql-bootstrap/0.log" Dec 15 06:38:40 crc kubenswrapper[4747]: I1215 06:38:40.934847 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_13aa9deb-71b7-4adf-858c-89c461427547/nova-scheduler-scheduler/0.log" Dec 15 06:38:40 crc kubenswrapper[4747]: I1215 06:38:40.999042 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_22da0dca-a59a-40f7-8dd2-95305eea5ee0/mysql-bootstrap/0.log" Dec 15 06:38:41 crc kubenswrapper[4747]: I1215 06:38:41.181985 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f242c5ef-84fc-4437-86a0-0175e8ea123b/mysql-bootstrap/0.log" Dec 15 06:38:41 crc kubenswrapper[4747]: I1215 06:38:41.188375 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_22da0dca-a59a-40f7-8dd2-95305eea5ee0/galera/0.log" Dec 15 06:38:41 crc kubenswrapper[4747]: I1215 06:38:41.349543 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f242c5ef-84fc-4437-86a0-0175e8ea123b/galera/0.log" Dec 15 06:38:41 crc kubenswrapper[4747]: I1215 06:38:41.372789 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_03ee9ab5-c184-4473-ba41-5609f6aa29df/openstackclient/0.log" Dec 15 06:38:41 crc kubenswrapper[4747]: I1215 06:38:41.387051 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f242c5ef-84fc-4437-86a0-0175e8ea123b/mysql-bootstrap/0.log" Dec 15 06:38:41 crc kubenswrapper[4747]: I1215 06:38:41.515232 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e45a17a8-29f1-40e2-96ae-f2db0b32407e/nova-metadata-metadata/0.log" Dec 15 06:38:41 crc kubenswrapper[4747]: I1215 06:38:41.732565 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-pl88m_de8a2190-14e4-44fa-a3a7-18182a6b4df6/openstack-network-exporter/0.log" Dec 15 06:38:41 crc kubenswrapper[4747]: I1215 06:38:41.752406 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-b65n4_becaa3b6-8cd5-4e55-9a81-0a21fec0a70b/ovn-controller/0.log" Dec 15 06:38:41 crc kubenswrapper[4747]: I1215 06:38:41.899034 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jmz8h_dca41dd5-5747-42a1-8703-30ae549342b7/ovsdb-server-init/0.log" Dec 15 06:38:42 crc kubenswrapper[4747]: I1215 06:38:42.039504 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jmz8h_dca41dd5-5747-42a1-8703-30ae549342b7/ovsdb-server-init/0.log" Dec 15 06:38:42 crc kubenswrapper[4747]: I1215 06:38:42.040847 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-jmz8h_dca41dd5-5747-42a1-8703-30ae549342b7/ovsdb-server/0.log" Dec 15 06:38:42 crc kubenswrapper[4747]: I1215 06:38:42.080014 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jmz8h_dca41dd5-5747-42a1-8703-30ae549342b7/ovs-vswitchd/0.log" Dec 15 06:38:42 crc kubenswrapper[4747]: I1215 06:38:42.137061 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-cppqm_ae0fffb8-5fa6-4351-83ac-e2687b00d983/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:38:42 crc kubenswrapper[4747]: I1215 06:38:42.192199 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c9efeed7-bf14-463d-829f-b3e95d8323b2/openstack-network-exporter/0.log" Dec 15 06:38:42 crc kubenswrapper[4747]: I1215 06:38:42.224800 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c9efeed7-bf14-463d-829f-b3e95d8323b2/ovn-northd/0.log" Dec 15 06:38:42 crc kubenswrapper[4747]: I1215 06:38:42.310637 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d84a0b88-fbfb-4d28-89e0-5a64b4a1430f/openstack-network-exporter/0.log" Dec 15 06:38:42 crc kubenswrapper[4747]: I1215 06:38:42.374238 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d84a0b88-fbfb-4d28-89e0-5a64b4a1430f/ovsdbserver-nb/0.log" Dec 15 06:38:42 crc kubenswrapper[4747]: I1215 06:38:42.450495 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_633ee263-eae2-4211-ae9e-d0efd7f7ac2f/openstack-network-exporter/0.log" Dec 15 06:38:42 crc kubenswrapper[4747]: I1215 06:38:42.477515 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_633ee263-eae2-4211-ae9e-d0efd7f7ac2f/ovsdbserver-sb/0.log" Dec 15 06:38:42 crc kubenswrapper[4747]: I1215 06:38:42.597065 4747 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_placement-d77548fc6-2zqkd_18ea26dc-78f1-479e-9e7c-722632f9304d/placement-api/0.log" Dec 15 06:38:42 crc kubenswrapper[4747]: I1215 06:38:42.666895 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-d77548fc6-2zqkd_18ea26dc-78f1-479e-9e7c-722632f9304d/placement-log/0.log" Dec 15 06:38:42 crc kubenswrapper[4747]: I1215 06:38:42.702529 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_42363452-e04c-462e-8341-6f3f99392357/setup-container/0.log" Dec 15 06:38:42 crc kubenswrapper[4747]: I1215 06:38:42.850650 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_42363452-e04c-462e-8341-6f3f99392357/setup-container/0.log" Dec 15 06:38:42 crc kubenswrapper[4747]: I1215 06:38:42.885624 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_42363452-e04c-462e-8341-6f3f99392357/rabbitmq/0.log" Dec 15 06:38:42 crc kubenswrapper[4747]: I1215 06:38:42.915266 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7a527363-fdfb-4bbe-a50e-41923c5cc78c/setup-container/0.log" Dec 15 06:38:43 crc kubenswrapper[4747]: I1215 06:38:43.059275 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7a527363-fdfb-4bbe-a50e-41923c5cc78c/setup-container/0.log" Dec 15 06:38:43 crc kubenswrapper[4747]: I1215 06:38:43.066470 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7a527363-fdfb-4bbe-a50e-41923c5cc78c/rabbitmq/0.log" Dec 15 06:38:43 crc kubenswrapper[4747]: I1215 06:38:43.106891 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-lr6h9_d87302aa-4741-47b7-8126-aaeeb74ace60/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:38:43 crc kubenswrapper[4747]: I1215 06:38:43.225821 4747 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-nngr7_9a1bff2c-a33c-4816-998e-243617f6e473/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:38:43 crc kubenswrapper[4747]: I1215 06:38:43.267680 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-kps28_d11ad7a8-e6c0-497a-8a1a-0b82be444a86/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:38:43 crc kubenswrapper[4747]: I1215 06:38:43.326762 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-jbxq5_3be348eb-7098-4347-b98e-dcf987dd854e/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:38:43 crc kubenswrapper[4747]: I1215 06:38:43.462748 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-cvt9r_adb1376f-7db9-4946-8843-44313c04df54/ssh-known-hosts-edpm-deployment/0.log" Dec 15 06:38:43 crc kubenswrapper[4747]: I1215 06:38:43.608855 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-84688cc58c-2mrlh_01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8/proxy-server/0.log" Dec 15 06:38:43 crc kubenswrapper[4747]: I1215 06:38:43.635365 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-84688cc58c-2mrlh_01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8/proxy-httpd/0.log" Dec 15 06:38:43 crc kubenswrapper[4747]: I1215 06:38:43.687040 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-f2kl2_6a5299e8-666f-431f-9ecc-5dcc74352e38/swift-ring-rebalance/0.log" Dec 15 06:38:43 crc kubenswrapper[4747]: I1215 06:38:43.809287 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/account-reaper/0.log" Dec 15 06:38:43 crc kubenswrapper[4747]: I1215 06:38:43.812643 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/account-auditor/0.log" Dec 15 06:38:43 crc kubenswrapper[4747]: I1215 06:38:43.876790 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/account-replicator/0.log" Dec 15 06:38:43 crc kubenswrapper[4747]: I1215 06:38:43.883717 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/account-server/0.log" Dec 15 06:38:43 crc kubenswrapper[4747]: I1215 06:38:43.967536 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/container-auditor/0.log" Dec 15 06:38:43 crc kubenswrapper[4747]: I1215 06:38:43.997053 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/container-replicator/0.log" Dec 15 06:38:44 crc kubenswrapper[4747]: I1215 06:38:44.001598 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/container-server/0.log" Dec 15 06:38:44 crc kubenswrapper[4747]: I1215 06:38:44.046661 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/container-updater/0.log" Dec 15 06:38:44 crc kubenswrapper[4747]: I1215 06:38:44.093943 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/object-auditor/0.log" Dec 15 06:38:44 crc kubenswrapper[4747]: I1215 06:38:44.108728 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/object-expirer/0.log" Dec 15 06:38:44 crc kubenswrapper[4747]: I1215 06:38:44.177164 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/object-replicator/0.log" Dec 15 06:38:44 crc kubenswrapper[4747]: I1215 06:38:44.180908 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/object-server/0.log" Dec 15 06:38:44 crc kubenswrapper[4747]: I1215 06:38:44.247494 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/object-updater/0.log" Dec 15 06:38:44 crc kubenswrapper[4747]: I1215 06:38:44.282984 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/rsync/0.log" Dec 15 06:38:44 crc kubenswrapper[4747]: I1215 06:38:44.315040 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_08c6df63-e1b2-4194-9bbe-b07410de16e7/swift-recon-cron/0.log" Dec 15 06:38:44 crc kubenswrapper[4747]: I1215 06:38:44.440718 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-f674t_a7d200be-a60e-4759-8772-1845c1ab0534/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:38:44 crc kubenswrapper[4747]: I1215 06:38:44.514788 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_0feaf663-b187-479f-8129-5aa6bf3b9047/tempest-tests-tempest-tests-runner/0.log" Dec 15 06:38:44 crc kubenswrapper[4747]: I1215 06:38:44.586856 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b3fb43ea-8919-4b3f-bfd7-27ee6d7e8a0b/test-operator-logs-container/0.log" Dec 15 06:38:44 crc kubenswrapper[4747]: I1215 06:38:44.641548 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-7rcwr_89aec499-875b-4b3b-8486-b01d8713b1c6/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 15 06:39:04 crc kubenswrapper[4747]: I1215 06:39:04.219996 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-95949466-pzsnr_966a3797-97c2-4e8d-8799-6b8a287efd78/manager/0.log" Dec 15 06:39:04 crc kubenswrapper[4747]: I1215 06:39:04.351550 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-rkqrw_a9d4c90d-ecd6-4126-8d91-dfb784a64d54/manager/0.log" Dec 15 06:39:04 crc kubenswrapper[4747]: I1215 06:39:04.359184 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5cf45c46bd-ggkl6_50d161a9-2162-4642-bfd4-74bde1129134/manager/0.log" Dec 15 06:39:04 crc kubenswrapper[4747]: I1215 06:39:04.642706 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl_2c345b7d-bd2d-43c7-9f3f-906a003a24e5/util/0.log" Dec 15 06:39:04 crc kubenswrapper[4747]: I1215 06:39:04.829395 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl_2c345b7d-bd2d-43c7-9f3f-906a003a24e5/util/0.log" Dec 15 06:39:04 crc kubenswrapper[4747]: I1215 06:39:04.833753 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl_2c345b7d-bd2d-43c7-9f3f-906a003a24e5/pull/0.log" Dec 15 06:39:04 crc kubenswrapper[4747]: I1215 06:39:04.861707 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl_2c345b7d-bd2d-43c7-9f3f-906a003a24e5/pull/0.log" Dec 15 06:39:05 
crc kubenswrapper[4747]: I1215 06:39:05.017919 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl_2c345b7d-bd2d-43c7-9f3f-906a003a24e5/util/0.log" Dec 15 06:39:05 crc kubenswrapper[4747]: I1215 06:39:05.024257 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl_2c345b7d-bd2d-43c7-9f3f-906a003a24e5/extract/0.log" Dec 15 06:39:05 crc kubenswrapper[4747]: I1215 06:39:05.055304 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fmjvsl_2c345b7d-bd2d-43c7-9f3f-906a003a24e5/pull/0.log" Dec 15 06:39:05 crc kubenswrapper[4747]: I1215 06:39:05.214792 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-59b8dcb766-tcs4c_07926291-631c-415d-8aaa-c425852decd9/manager/0.log" Dec 15 06:39:05 crc kubenswrapper[4747]: I1215 06:39:05.250199 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-767f9d7567-hk9c4_ed7a99f7-83b8-48f4-9cc9-135af2e16529/manager/0.log" Dec 15 06:39:05 crc kubenswrapper[4747]: I1215 06:39:05.402681 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6ccf486b9-cmgcn_c8a35ff2-385b-46d4-95e6-d7e85a7c8477/manager/0.log" Dec 15 06:39:05 crc kubenswrapper[4747]: I1215 06:39:05.620293 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f458558d7-vf58x_b93e01ce-98e3-4941-8721-d9ce67414730/manager/0.log" Dec 15 06:39:05 crc kubenswrapper[4747]: I1215 06:39:05.638772 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58944d7758-s79wq_bb8f1731-54b2-4d71-96fb-13fde067045b/manager/0.log" Dec 15 06:39:05 crc kubenswrapper[4747]: I1215 06:39:05.801887 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-5c7cbf548f-v7cjm_fdda9bcd-0316-4549-af8b-ae0e151e59d7/manager/0.log" Dec 15 06:39:05 crc kubenswrapper[4747]: I1215 06:39:05.822683 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5fdd9786f7-dg8cj_e6558c12-d59f-4593-9605-a7dc6c19e766/manager/0.log" Dec 15 06:39:06 crc kubenswrapper[4747]: I1215 06:39:06.032610 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-f76f4954c-5chln_dc8104ce-563e-4e6f-b61d-18e2bdc49879/manager/0.log" Dec 15 06:39:06 crc kubenswrapper[4747]: I1215 06:39:06.038894 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-qw6tr_5f14ea23-34de-4d4b-971d-dc90d34c44a9/manager/0.log" Dec 15 06:39:06 crc kubenswrapper[4747]: I1215 06:39:06.214231 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5fbbf8b6cc-snvkz_60924e24-00f9-4f6a-bf7e-385f8e54a027/manager/0.log" Dec 15 06:39:06 crc kubenswrapper[4747]: I1215 06:39:06.215819 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-sffcl_5a07861b-82a4-47c3-8255-3b76b44da9d6/manager/0.log" Dec 15 06:39:06 crc kubenswrapper[4747]: I1215 06:39:06.407979 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-689f887b54sfqvx_3858e881-df69-47eb-8a78-fa48f7ca7f87/manager/0.log" Dec 15 06:39:06 crc kubenswrapper[4747]: I1215 06:39:06.883971 4747 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-57bbbf4567-4l6vr_e1d8f4a6-dd71-427f-98ac-5e77cc0fb1ae/operator/0.log" Dec 15 06:39:06 crc kubenswrapper[4747]: I1215 06:39:06.932887 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ssgvg_600db6fb-c49e-40e5-a195-756c80b40b7d/registry-server/0.log" Dec 15 06:39:07 crc kubenswrapper[4747]: I1215 06:39:07.148601 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-jmxtj_e1cafba6-81fa-4f70-b79d-4d02cdd194a3/manager/0.log" Dec 15 06:39:07 crc kubenswrapper[4747]: I1215 06:39:07.153822 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8665b56d78-c2gjc_c1d38621-ff5b-4d92-8457-9568c6b67416/manager/0.log" Dec 15 06:39:07 crc kubenswrapper[4747]: I1215 06:39:07.422817 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vtm79_3818fc80-b8e4-4dc2-9470-587cf10a2350/operator/0.log" Dec 15 06:39:07 crc kubenswrapper[4747]: I1215 06:39:07.465613 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5c6df8f9-tm9tq_4e1be8a6-df60-418b-911f-efbf8aa5cf5a/manager/0.log" Dec 15 06:39:07 crc kubenswrapper[4747]: I1215 06:39:07.681890 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-97d456b9-gqlwk_df77558c-ad92-43a1-9d9a-e3fac782b0e8/manager/0.log" Dec 15 06:39:07 crc kubenswrapper[4747]: I1215 06:39:07.687236 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-56f6fbdf6-ch5s4_e3f1bf4c-044b-49d5-be51-b853e2f6a7b0/manager/0.log" Dec 15 06:39:07 crc kubenswrapper[4747]: I1215 06:39:07.692508 4747 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-756ccf86c7-6dlgk_3f5c0d61-d8f5-4bfb-87c1-4f795057abd2/manager/0.log" Dec 15 06:39:07 crc kubenswrapper[4747]: I1215 06:39:07.864256 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-55f78b7c4c-rgxgj_2e8d5dd7-baa6-49fb-9f9f-735905ac6e61/manager/0.log" Dec 15 06:39:25 crc kubenswrapper[4747]: I1215 06:39:25.203583 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xkbmm_f139e81b-c534-4004-81b1-202a6b0e45f2/control-plane-machine-set-operator/0.log" Dec 15 06:39:25 crc kubenswrapper[4747]: I1215 06:39:25.337883 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-g46rv_efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39/kube-rbac-proxy/0.log" Dec 15 06:39:25 crc kubenswrapper[4747]: I1215 06:39:25.393782 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-g46rv_efb2d24d-c4bc-4f9a-ae33-a7f3e6090a39/machine-api-operator/0.log" Dec 15 06:39:37 crc kubenswrapper[4747]: I1215 06:39:37.192681 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-bwhbl_2dc6869c-9693-4dc8-81eb-4ff08e334aaf/cert-manager-controller/0.log" Dec 15 06:39:37 crc kubenswrapper[4747]: I1215 06:39:37.528050 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-t5flq_7e3304fa-a54c-4472-935a-aad6d8673d12/cert-manager-cainjector/0.log" Dec 15 06:39:37 crc kubenswrapper[4747]: I1215 06:39:37.528099 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-qmrz8_56932a48-4e8d-4052-b33e-daff9aeec190/cert-manager-webhook/0.log" Dec 15 06:39:48 crc kubenswrapper[4747]: I1215 06:39:48.032536 4747 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-tjfnm_8f1ea057-6f84-40ec-be2e-54583b3af99b/nmstate-console-plugin/0.log" Dec 15 06:39:48 crc kubenswrapper[4747]: I1215 06:39:48.186913 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-4vz6n_cc636dc4-0911-423b-8327-5b81d759c74a/nmstate-handler/0.log" Dec 15 06:39:48 crc kubenswrapper[4747]: I1215 06:39:48.203363 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-pm75r_b93f5376-bd46-4dc1-82aa-6b1db7622176/kube-rbac-proxy/0.log" Dec 15 06:39:48 crc kubenswrapper[4747]: I1215 06:39:48.217428 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-pm75r_b93f5376-bd46-4dc1-82aa-6b1db7622176/nmstate-metrics/0.log" Dec 15 06:39:48 crc kubenswrapper[4747]: I1215 06:39:48.380985 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-fns29_37453676-6389-4503-b1dc-9afdbd759c64/nmstate-webhook/0.log" Dec 15 06:39:48 crc kubenswrapper[4747]: I1215 06:39:48.387457 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-6b7xv_e1f942fd-4913-456f-b28c-463fd3c2759e/nmstate-operator/0.log" Dec 15 06:40:00 crc kubenswrapper[4747]: I1215 06:40:00.511129 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-84gxw_2fd21575-3653-416d-a59a-d2802bc9bf09/kube-rbac-proxy/0.log" Dec 15 06:40:00 crc kubenswrapper[4747]: I1215 06:40:00.604684 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-84gxw_2fd21575-3653-416d-a59a-d2802bc9bf09/controller/0.log" Dec 15 06:40:00 crc kubenswrapper[4747]: I1215 06:40:00.671819 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-frr-files/0.log" Dec 15 06:40:00 crc kubenswrapper[4747]: I1215 06:40:00.870092 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-reloader/0.log" Dec 15 06:40:00 crc kubenswrapper[4747]: I1215 06:40:00.881496 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-frr-files/0.log" Dec 15 06:40:00 crc kubenswrapper[4747]: I1215 06:40:00.929082 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-reloader/0.log" Dec 15 06:40:00 crc kubenswrapper[4747]: I1215 06:40:00.929854 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-metrics/0.log" Dec 15 06:40:01 crc kubenswrapper[4747]: I1215 06:40:01.071561 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-reloader/0.log" Dec 15 06:40:01 crc kubenswrapper[4747]: I1215 06:40:01.078918 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-frr-files/0.log" Dec 15 06:40:01 crc kubenswrapper[4747]: I1215 06:40:01.085744 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-metrics/0.log" Dec 15 06:40:01 crc kubenswrapper[4747]: I1215 06:40:01.116338 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-metrics/0.log" Dec 15 06:40:01 crc kubenswrapper[4747]: I1215 06:40:01.286026 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-frr-files/0.log" Dec 15 06:40:01 crc kubenswrapper[4747]: I1215 06:40:01.288097 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-metrics/0.log" Dec 15 06:40:01 crc kubenswrapper[4747]: I1215 06:40:01.293030 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/controller/0.log" Dec 15 06:40:01 crc kubenswrapper[4747]: I1215 06:40:01.300719 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/cp-reloader/0.log" Dec 15 06:40:01 crc kubenswrapper[4747]: I1215 06:40:01.426202 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/kube-rbac-proxy/0.log" Dec 15 06:40:01 crc kubenswrapper[4747]: I1215 06:40:01.476576 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/kube-rbac-proxy-frr/0.log" Dec 15 06:40:01 crc kubenswrapper[4747]: I1215 06:40:01.498144 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/frr-metrics/0.log" Dec 15 06:40:01 crc kubenswrapper[4747]: I1215 06:40:01.653980 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-szbwq_7b46267f-c728-4995-9817-87b793f77a58/frr-k8s-webhook-server/0.log" Dec 15 06:40:01 crc kubenswrapper[4747]: I1215 06:40:01.674845 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/reloader/0.log" Dec 15 06:40:01 crc kubenswrapper[4747]: I1215 06:40:01.922134 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-75d987bf4c-9srrp_adb7b699-78a1-41ed-a24f-2c57a128568e/manager/0.log" Dec 15 06:40:02 crc kubenswrapper[4747]: I1215 06:40:02.076880 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rtnx5_b943df0c-29b6-42f3-884b-707aaf02c5d0/kube-rbac-proxy/0.log" Dec 15 06:40:02 crc kubenswrapper[4747]: I1215 06:40:02.139703 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5c5c75c884-t8trz_3a13fc24-266f-433f-bbff-0cd3d1fc29fc/webhook-server/0.log" Dec 15 06:40:02 crc kubenswrapper[4747]: I1215 06:40:02.739787 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d98xw_cf41786d-c244-4754-ba59-4a9b6c834f9f/frr/0.log" Dec 15 06:40:02 crc kubenswrapper[4747]: I1215 06:40:02.757691 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rtnx5_b943df0c-29b6-42f3-884b-707aaf02c5d0/speaker/0.log" Dec 15 06:40:13 crc kubenswrapper[4747]: I1215 06:40:13.686613 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f_5049975b-44d3-44ef-98d8-94691dcb042f/util/0.log" Dec 15 06:40:13 crc kubenswrapper[4747]: I1215 06:40:13.792192 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f_5049975b-44d3-44ef-98d8-94691dcb042f/util/0.log" Dec 15 06:40:13 crc kubenswrapper[4747]: I1215 06:40:13.833734 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f_5049975b-44d3-44ef-98d8-94691dcb042f/pull/0.log" Dec 15 06:40:13 crc kubenswrapper[4747]: I1215 06:40:13.860408 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f_5049975b-44d3-44ef-98d8-94691dcb042f/pull/0.log" Dec 15 06:40:13 crc kubenswrapper[4747]: I1215 06:40:13.999240 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f_5049975b-44d3-44ef-98d8-94691dcb042f/pull/0.log" Dec 15 06:40:14 crc kubenswrapper[4747]: I1215 06:40:14.003778 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f_5049975b-44d3-44ef-98d8-94691dcb042f/util/0.log" Dec 15 06:40:14 crc kubenswrapper[4747]: I1215 06:40:14.019794 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4sdz5f_5049975b-44d3-44ef-98d8-94691dcb042f/extract/0.log" Dec 15 06:40:14 crc kubenswrapper[4747]: I1215 06:40:14.158621 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_30696d2b-dd70-4eb7-88c1-9bc23b39c07c/util/0.log" Dec 15 06:40:14 crc kubenswrapper[4747]: I1215 06:40:14.278872 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_30696d2b-dd70-4eb7-88c1-9bc23b39c07c/util/0.log" Dec 15 06:40:14 crc kubenswrapper[4747]: I1215 06:40:14.284120 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_30696d2b-dd70-4eb7-88c1-9bc23b39c07c/pull/0.log" Dec 15 06:40:14 crc kubenswrapper[4747]: I1215 06:40:14.288429 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_30696d2b-dd70-4eb7-88c1-9bc23b39c07c/pull/0.log" Dec 15 
06:40:14 crc kubenswrapper[4747]: I1215 06:40:14.440775 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_30696d2b-dd70-4eb7-88c1-9bc23b39c07c/util/0.log" Dec 15 06:40:14 crc kubenswrapper[4747]: I1215 06:40:14.453679 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_30696d2b-dd70-4eb7-88c1-9bc23b39c07c/extract/0.log" Dec 15 06:40:14 crc kubenswrapper[4747]: I1215 06:40:14.475317 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8vqbwh_30696d2b-dd70-4eb7-88c1-9bc23b39c07c/pull/0.log" Dec 15 06:40:14 crc kubenswrapper[4747]: I1215 06:40:14.585212 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vzrzt_a54ad897-346d-40bf-8b62-df432709d572/extract-utilities/0.log" Dec 15 06:40:14 crc kubenswrapper[4747]: I1215 06:40:14.738548 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vzrzt_a54ad897-346d-40bf-8b62-df432709d572/extract-content/0.log" Dec 15 06:40:14 crc kubenswrapper[4747]: I1215 06:40:14.740035 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vzrzt_a54ad897-346d-40bf-8b62-df432709d572/extract-utilities/0.log" Dec 15 06:40:14 crc kubenswrapper[4747]: I1215 06:40:14.780249 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vzrzt_a54ad897-346d-40bf-8b62-df432709d572/extract-content/0.log" Dec 15 06:40:14 crc kubenswrapper[4747]: I1215 06:40:14.945298 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vzrzt_a54ad897-346d-40bf-8b62-df432709d572/extract-content/0.log" Dec 15 06:40:14 crc 
kubenswrapper[4747]: I1215 06:40:14.945304 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vzrzt_a54ad897-346d-40bf-8b62-df432709d572/extract-utilities/0.log" Dec 15 06:40:15 crc kubenswrapper[4747]: I1215 06:40:15.145002 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dljqm_61ed4e5e-9fba-404a-8e7e-e231ee5d7134/extract-utilities/0.log" Dec 15 06:40:15 crc kubenswrapper[4747]: I1215 06:40:15.382472 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dljqm_61ed4e5e-9fba-404a-8e7e-e231ee5d7134/extract-content/0.log" Dec 15 06:40:15 crc kubenswrapper[4747]: I1215 06:40:15.404823 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vzrzt_a54ad897-346d-40bf-8b62-df432709d572/registry-server/0.log" Dec 15 06:40:15 crc kubenswrapper[4747]: I1215 06:40:15.430712 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dljqm_61ed4e5e-9fba-404a-8e7e-e231ee5d7134/extract-utilities/0.log" Dec 15 06:40:15 crc kubenswrapper[4747]: I1215 06:40:15.434861 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dljqm_61ed4e5e-9fba-404a-8e7e-e231ee5d7134/extract-content/0.log" Dec 15 06:40:15 crc kubenswrapper[4747]: I1215 06:40:15.534345 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dljqm_61ed4e5e-9fba-404a-8e7e-e231ee5d7134/extract-utilities/0.log" Dec 15 06:40:15 crc kubenswrapper[4747]: I1215 06:40:15.545718 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dljqm_61ed4e5e-9fba-404a-8e7e-e231ee5d7134/extract-content/0.log" Dec 15 06:40:15 crc kubenswrapper[4747]: I1215 06:40:15.745642 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-qr5lt_f22206aa-87c5-4c96-b146-53b0890697fa/marketplace-operator/0.log" Dec 15 06:40:15 crc kubenswrapper[4747]: I1215 06:40:15.844329 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r755m_efb301de-15d1-452a-b8e9-10296872545b/extract-utilities/0.log" Dec 15 06:40:15 crc kubenswrapper[4747]: I1215 06:40:15.984904 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dljqm_61ed4e5e-9fba-404a-8e7e-e231ee5d7134/registry-server/0.log" Dec 15 06:40:16 crc kubenswrapper[4747]: I1215 06:40:16.063629 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r755m_efb301de-15d1-452a-b8e9-10296872545b/extract-content/0.log" Dec 15 06:40:16 crc kubenswrapper[4747]: I1215 06:40:16.067496 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r755m_efb301de-15d1-452a-b8e9-10296872545b/extract-content/0.log" Dec 15 06:40:16 crc kubenswrapper[4747]: I1215 06:40:16.083059 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r755m_efb301de-15d1-452a-b8e9-10296872545b/extract-utilities/0.log" Dec 15 06:40:16 crc kubenswrapper[4747]: I1215 06:40:16.230910 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r755m_efb301de-15d1-452a-b8e9-10296872545b/extract-utilities/0.log" Dec 15 06:40:16 crc kubenswrapper[4747]: I1215 06:40:16.243191 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-r755m_efb301de-15d1-452a-b8e9-10296872545b/extract-content/0.log" Dec 15 06:40:16 crc kubenswrapper[4747]: I1215 06:40:16.375797 4747 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-r755m_efb301de-15d1-452a-b8e9-10296872545b/registry-server/0.log" Dec 15 06:40:16 crc kubenswrapper[4747]: I1215 06:40:16.411360 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lbw8g_e2652452-9d91-4f09-9422-fa69bed43b9e/extract-utilities/0.log" Dec 15 06:40:16 crc kubenswrapper[4747]: I1215 06:40:16.590345 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lbw8g_e2652452-9d91-4f09-9422-fa69bed43b9e/extract-content/0.log" Dec 15 06:40:16 crc kubenswrapper[4747]: I1215 06:40:16.616802 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lbw8g_e2652452-9d91-4f09-9422-fa69bed43b9e/extract-utilities/0.log" Dec 15 06:40:16 crc kubenswrapper[4747]: I1215 06:40:16.627081 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lbw8g_e2652452-9d91-4f09-9422-fa69bed43b9e/extract-content/0.log" Dec 15 06:40:16 crc kubenswrapper[4747]: I1215 06:40:16.737804 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lbw8g_e2652452-9d91-4f09-9422-fa69bed43b9e/extract-utilities/0.log" Dec 15 06:40:16 crc kubenswrapper[4747]: I1215 06:40:16.742120 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lbw8g_e2652452-9d91-4f09-9422-fa69bed43b9e/extract-content/0.log" Dec 15 06:40:17 crc kubenswrapper[4747]: I1215 06:40:17.105842 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lbw8g_e2652452-9d91-4f09-9422-fa69bed43b9e/registry-server/0.log" Dec 15 06:40:28 crc kubenswrapper[4747]: I1215 06:40:28.865075 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:40:28 crc kubenswrapper[4747]: I1215 06:40:28.865623 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:40:35 crc kubenswrapper[4747]: I1215 06:40:35.721220 4747 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-84688cc58c-2mrlh" podUID="01f6a5bb-ed36-44f2-b5be-5d3b235ca4e8" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 15 06:40:58 crc kubenswrapper[4747]: I1215 06:40:58.865209 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:40:58 crc kubenswrapper[4747]: I1215 06:40:58.865837 4747 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:41:28 crc kubenswrapper[4747]: I1215 06:41:28.865398 4747 patch_prober.go:28] interesting pod/machine-config-daemon-nldtn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 15 06:41:28 crc kubenswrapper[4747]: I1215 06:41:28.867039 4747 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 15 06:41:28 crc kubenswrapper[4747]: I1215 06:41:28.867169 4747 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" Dec 15 06:41:28 crc kubenswrapper[4747]: I1215 06:41:28.868126 4747 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"04957b75e1647a0c1079a2918e34078c34fa86cbc8c0b1900884016cbe7c9c82"} pod="openshift-machine-config-operator/machine-config-daemon-nldtn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 15 06:41:28 crc kubenswrapper[4747]: I1215 06:41:28.868268 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerName="machine-config-daemon" containerID="cri-o://04957b75e1647a0c1079a2918e34078c34fa86cbc8c0b1900884016cbe7c9c82" gracePeriod=600 Dec 15 06:41:28 crc kubenswrapper[4747]: E1215 06:41:28.991136 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:41:29 crc kubenswrapper[4747]: I1215 06:41:29.861263 4747 generic.go:334] "Generic (PLEG): container finished" 
podID="1d50e5c9-7ce9-40c0-b942-01031654d27c" containerID="04957b75e1647a0c1079a2918e34078c34fa86cbc8c0b1900884016cbe7c9c82" exitCode=0 Dec 15 06:41:29 crc kubenswrapper[4747]: I1215 06:41:29.861331 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" event={"ID":"1d50e5c9-7ce9-40c0-b942-01031654d27c","Type":"ContainerDied","Data":"04957b75e1647a0c1079a2918e34078c34fa86cbc8c0b1900884016cbe7c9c82"} Dec 15 06:41:29 crc kubenswrapper[4747]: I1215 06:41:29.861397 4747 scope.go:117] "RemoveContainer" containerID="e21c8b075436a10f5cd74cbf56c983328815d2f43d9184bb57bbdc7f8ffd76c1" Dec 15 06:41:29 crc kubenswrapper[4747]: I1215 06:41:29.861870 4747 scope.go:117] "RemoveContainer" containerID="04957b75e1647a0c1079a2918e34078c34fa86cbc8c0b1900884016cbe7c9c82" Dec 15 06:41:29 crc kubenswrapper[4747]: E1215 06:41:29.862211 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:41:38 crc kubenswrapper[4747]: I1215 06:41:38.933458 4747 generic.go:334] "Generic (PLEG): container finished" podID="d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa" containerID="c13f33e3f0cc3b6f5f11fb439e639ee0570f0ed2b24536cc05d52cf68e89a5a3" exitCode=0 Dec 15 06:41:38 crc kubenswrapper[4747]: I1215 06:41:38.933665 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mxqhb/must-gather-4f456" event={"ID":"d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa","Type":"ContainerDied","Data":"c13f33e3f0cc3b6f5f11fb439e639ee0570f0ed2b24536cc05d52cf68e89a5a3"} Dec 15 06:41:38 crc kubenswrapper[4747]: I1215 06:41:38.934660 4747 scope.go:117] "RemoveContainer" 
containerID="c13f33e3f0cc3b6f5f11fb439e639ee0570f0ed2b24536cc05d52cf68e89a5a3" Dec 15 06:41:39 crc kubenswrapper[4747]: I1215 06:41:39.426730 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mxqhb_must-gather-4f456_d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa/gather/0.log" Dec 15 06:41:42 crc kubenswrapper[4747]: I1215 06:41:42.630266 4747 scope.go:117] "RemoveContainer" containerID="04957b75e1647a0c1079a2918e34078c34fa86cbc8c0b1900884016cbe7c9c82" Dec 15 06:41:42 crc kubenswrapper[4747]: E1215 06:41:42.631461 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:41:48 crc kubenswrapper[4747]: I1215 06:41:48.819037 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mxqhb/must-gather-4f456"] Dec 15 06:41:48 crc kubenswrapper[4747]: I1215 06:41:48.819813 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-mxqhb/must-gather-4f456" podUID="d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa" containerName="copy" containerID="cri-o://7450b811132c4062a72595939955387c34a6207ea0bc870ed9b7bdb03414c9a6" gracePeriod=2 Dec 15 06:41:48 crc kubenswrapper[4747]: I1215 06:41:48.829238 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mxqhb/must-gather-4f456"] Dec 15 06:41:49 crc kubenswrapper[4747]: I1215 06:41:49.022440 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mxqhb_must-gather-4f456_d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa/copy/0.log" Dec 15 06:41:49 crc kubenswrapper[4747]: I1215 06:41:49.022947 4747 generic.go:334] "Generic (PLEG): 
container finished" podID="d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa" containerID="7450b811132c4062a72595939955387c34a6207ea0bc870ed9b7bdb03414c9a6" exitCode=143 Dec 15 06:41:49 crc kubenswrapper[4747]: I1215 06:41:49.191995 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mxqhb_must-gather-4f456_d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa/copy/0.log" Dec 15 06:41:49 crc kubenswrapper[4747]: I1215 06:41:49.192697 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mxqhb/must-gather-4f456" Dec 15 06:41:49 crc kubenswrapper[4747]: I1215 06:41:49.266882 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8wqk\" (UniqueName: \"kubernetes.io/projected/d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa-kube-api-access-k8wqk\") pod \"d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa\" (UID: \"d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa\") " Dec 15 06:41:49 crc kubenswrapper[4747]: I1215 06:41:49.267037 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa-must-gather-output\") pod \"d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa\" (UID: \"d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa\") " Dec 15 06:41:49 crc kubenswrapper[4747]: I1215 06:41:49.273074 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa-kube-api-access-k8wqk" (OuterVolumeSpecName: "kube-api-access-k8wqk") pod "d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa" (UID: "d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa"). InnerVolumeSpecName "kube-api-access-k8wqk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:41:49 crc kubenswrapper[4747]: I1215 06:41:49.369321 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8wqk\" (UniqueName: \"kubernetes.io/projected/d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa-kube-api-access-k8wqk\") on node \"crc\" DevicePath \"\"" Dec 15 06:41:49 crc kubenswrapper[4747]: I1215 06:41:49.379286 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa" (UID: "d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:41:49 crc kubenswrapper[4747]: I1215 06:41:49.471113 4747 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 15 06:41:50 crc kubenswrapper[4747]: I1215 06:41:50.038108 4747 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mxqhb_must-gather-4f456_d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa/copy/0.log" Dec 15 06:41:50 crc kubenswrapper[4747]: I1215 06:41:50.038506 4747 scope.go:117] "RemoveContainer" containerID="7450b811132c4062a72595939955387c34a6207ea0bc870ed9b7bdb03414c9a6" Dec 15 06:41:50 crc kubenswrapper[4747]: I1215 06:41:50.038569 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mxqhb/must-gather-4f456" Dec 15 06:41:50 crc kubenswrapper[4747]: I1215 06:41:50.072070 4747 scope.go:117] "RemoveContainer" containerID="c13f33e3f0cc3b6f5f11fb439e639ee0570f0ed2b24536cc05d52cf68e89a5a3" Dec 15 06:41:50 crc kubenswrapper[4747]: I1215 06:41:50.640676 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa" path="/var/lib/kubelet/pods/d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa/volumes" Dec 15 06:41:54 crc kubenswrapper[4747]: I1215 06:41:54.629869 4747 scope.go:117] "RemoveContainer" containerID="04957b75e1647a0c1079a2918e34078c34fa86cbc8c0b1900884016cbe7c9c82" Dec 15 06:41:54 crc kubenswrapper[4747]: E1215 06:41:54.630583 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:42:09 crc kubenswrapper[4747]: I1215 06:42:09.629402 4747 scope.go:117] "RemoveContainer" containerID="04957b75e1647a0c1079a2918e34078c34fa86cbc8c0b1900884016cbe7c9c82" Dec 15 06:42:09 crc kubenswrapper[4747]: E1215 06:42:09.630291 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:42:24 crc kubenswrapper[4747]: I1215 06:42:24.629281 4747 scope.go:117] "RemoveContainer" 
containerID="04957b75e1647a0c1079a2918e34078c34fa86cbc8c0b1900884016cbe7c9c82" Dec 15 06:42:24 crc kubenswrapper[4747]: E1215 06:42:24.630082 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:42:37 crc kubenswrapper[4747]: I1215 06:42:37.629665 4747 scope.go:117] "RemoveContainer" containerID="04957b75e1647a0c1079a2918e34078c34fa86cbc8c0b1900884016cbe7c9c82" Dec 15 06:42:37 crc kubenswrapper[4747]: E1215 06:42:37.630731 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:42:51 crc kubenswrapper[4747]: I1215 06:42:51.629677 4747 scope.go:117] "RemoveContainer" containerID="04957b75e1647a0c1079a2918e34078c34fa86cbc8c0b1900884016cbe7c9c82" Dec 15 06:42:51 crc kubenswrapper[4747]: E1215 06:42:51.630902 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:42:55 crc kubenswrapper[4747]: I1215 06:42:55.585531 4747 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9m4d8"] Dec 15 06:42:55 crc kubenswrapper[4747]: E1215 06:42:55.586814 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa" containerName="copy" Dec 15 06:42:55 crc kubenswrapper[4747]: I1215 06:42:55.586832 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa" containerName="copy" Dec 15 06:42:55 crc kubenswrapper[4747]: E1215 06:42:55.586891 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa" containerName="gather" Dec 15 06:42:55 crc kubenswrapper[4747]: I1215 06:42:55.586898 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa" containerName="gather" Dec 15 06:42:55 crc kubenswrapper[4747]: E1215 06:42:55.586908 4747 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d446070-5a65-48d3-9f93-a24d589dc62e" containerName="container-00" Dec 15 06:42:55 crc kubenswrapper[4747]: I1215 06:42:55.586914 4747 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d446070-5a65-48d3-9f93-a24d589dc62e" containerName="container-00" Dec 15 06:42:55 crc kubenswrapper[4747]: I1215 06:42:55.587232 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa" containerName="copy" Dec 15 06:42:55 crc kubenswrapper[4747]: I1215 06:42:55.587255 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8ac7c8f-2b53-4e95-ba89-b67b7e73ecfa" containerName="gather" Dec 15 06:42:55 crc kubenswrapper[4747]: I1215 06:42:55.587277 4747 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d446070-5a65-48d3-9f93-a24d589dc62e" containerName="container-00" Dec 15 06:42:55 crc kubenswrapper[4747]: I1215 06:42:55.589172 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9m4d8" Dec 15 06:42:55 crc kubenswrapper[4747]: I1215 06:42:55.599398 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9m4d8"] Dec 15 06:42:55 crc kubenswrapper[4747]: I1215 06:42:55.769297 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl77j\" (UniqueName: \"kubernetes.io/projected/8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f-kube-api-access-xl77j\") pod \"redhat-operators-9m4d8\" (UID: \"8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f\") " pod="openshift-marketplace/redhat-operators-9m4d8" Dec 15 06:42:55 crc kubenswrapper[4747]: I1215 06:42:55.769376 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f-catalog-content\") pod \"redhat-operators-9m4d8\" (UID: \"8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f\") " pod="openshift-marketplace/redhat-operators-9m4d8" Dec 15 06:42:55 crc kubenswrapper[4747]: I1215 06:42:55.769841 4747 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f-utilities\") pod \"redhat-operators-9m4d8\" (UID: \"8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f\") " pod="openshift-marketplace/redhat-operators-9m4d8" Dec 15 06:42:55 crc kubenswrapper[4747]: I1215 06:42:55.872115 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f-utilities\") pod \"redhat-operators-9m4d8\" (UID: \"8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f\") " pod="openshift-marketplace/redhat-operators-9m4d8" Dec 15 06:42:55 crc kubenswrapper[4747]: I1215 06:42:55.872309 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xl77j\" (UniqueName: \"kubernetes.io/projected/8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f-kube-api-access-xl77j\") pod \"redhat-operators-9m4d8\" (UID: \"8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f\") " pod="openshift-marketplace/redhat-operators-9m4d8" Dec 15 06:42:55 crc kubenswrapper[4747]: I1215 06:42:55.872348 4747 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f-catalog-content\") pod \"redhat-operators-9m4d8\" (UID: \"8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f\") " pod="openshift-marketplace/redhat-operators-9m4d8" Dec 15 06:42:55 crc kubenswrapper[4747]: I1215 06:42:55.872641 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f-utilities\") pod \"redhat-operators-9m4d8\" (UID: \"8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f\") " pod="openshift-marketplace/redhat-operators-9m4d8" Dec 15 06:42:55 crc kubenswrapper[4747]: I1215 06:42:55.872860 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f-catalog-content\") pod \"redhat-operators-9m4d8\" (UID: \"8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f\") " pod="openshift-marketplace/redhat-operators-9m4d8" Dec 15 06:42:55 crc kubenswrapper[4747]: I1215 06:42:55.890528 4747 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl77j\" (UniqueName: \"kubernetes.io/projected/8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f-kube-api-access-xl77j\") pod \"redhat-operators-9m4d8\" (UID: \"8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f\") " pod="openshift-marketplace/redhat-operators-9m4d8" Dec 15 06:42:55 crc kubenswrapper[4747]: I1215 06:42:55.904514 4747 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9m4d8" Dec 15 06:42:56 crc kubenswrapper[4747]: I1215 06:42:56.309445 4747 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9m4d8"] Dec 15 06:42:56 crc kubenswrapper[4747]: I1215 06:42:56.611140 4747 generic.go:334] "Generic (PLEG): container finished" podID="8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f" containerID="81e76935b7cdd06b6827738da1838770d2b12507667576aa9ce04daf183fe8cb" exitCode=0 Dec 15 06:42:56 crc kubenswrapper[4747]: I1215 06:42:56.611209 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9m4d8" event={"ID":"8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f","Type":"ContainerDied","Data":"81e76935b7cdd06b6827738da1838770d2b12507667576aa9ce04daf183fe8cb"} Dec 15 06:42:56 crc kubenswrapper[4747]: I1215 06:42:56.611242 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9m4d8" event={"ID":"8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f","Type":"ContainerStarted","Data":"5ea40f288600fdd040ce082f7184641e4ba0b6b257733d18e9ed0e34b9d15f1f"} Dec 15 06:42:56 crc kubenswrapper[4747]: I1215 06:42:56.612957 4747 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 15 06:42:58 crc kubenswrapper[4747]: I1215 06:42:58.635751 4747 generic.go:334] "Generic (PLEG): container finished" podID="8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f" containerID="198e3670d1b34a4ab44ebf67f619a176997b9eea8bbae158beee6406fec62f56" exitCode=0 Dec 15 06:42:58 crc kubenswrapper[4747]: I1215 06:42:58.639895 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9m4d8" event={"ID":"8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f","Type":"ContainerDied","Data":"198e3670d1b34a4ab44ebf67f619a176997b9eea8bbae158beee6406fec62f56"} Dec 15 06:42:59 crc kubenswrapper[4747]: I1215 06:42:59.650047 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9m4d8" event={"ID":"8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f","Type":"ContainerStarted","Data":"49572669483d0245d824f84d5db355706620a14b91c0c2c544bc71def1164720"} Dec 15 06:42:59 crc kubenswrapper[4747]: I1215 06:42:59.675954 4747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9m4d8" podStartSLOduration=2.114684014 podStartE2EDuration="4.675936087s" podCreationTimestamp="2025-12-15 06:42:55 +0000 UTC" firstStartedPulling="2025-12-15 06:42:56.612709865 +0000 UTC m=+3940.309221781" lastFinishedPulling="2025-12-15 06:42:59.173961937 +0000 UTC m=+3942.870473854" observedRunningTime="2025-12-15 06:42:59.667781835 +0000 UTC m=+3943.364293752" watchObservedRunningTime="2025-12-15 06:42:59.675936087 +0000 UTC m=+3943.372448003" Dec 15 06:43:02 crc kubenswrapper[4747]: I1215 06:43:02.629204 4747 scope.go:117] "RemoveContainer" containerID="04957b75e1647a0c1079a2918e34078c34fa86cbc8c0b1900884016cbe7c9c82" Dec 15 06:43:02 crc kubenswrapper[4747]: E1215 06:43:02.630347 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:43:05 crc kubenswrapper[4747]: I1215 06:43:05.905839 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9m4d8" Dec 15 06:43:05 crc kubenswrapper[4747]: I1215 06:43:05.906680 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9m4d8" Dec 15 06:43:05 crc kubenswrapper[4747]: I1215 06:43:05.947985 4747 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-9m4d8" Dec 15 06:43:06 crc kubenswrapper[4747]: I1215 06:43:06.744701 4747 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9m4d8" Dec 15 06:43:06 crc kubenswrapper[4747]: I1215 06:43:06.782433 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9m4d8"] Dec 15 06:43:08 crc kubenswrapper[4747]: I1215 06:43:08.724368 4747 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9m4d8" podUID="8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f" containerName="registry-server" containerID="cri-o://49572669483d0245d824f84d5db355706620a14b91c0c2c544bc71def1164720" gracePeriod=2 Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.733582 4747 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9m4d8" Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.737339 4747 generic.go:334] "Generic (PLEG): container finished" podID="8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f" containerID="49572669483d0245d824f84d5db355706620a14b91c0c2c544bc71def1164720" exitCode=0 Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.737405 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9m4d8" event={"ID":"8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f","Type":"ContainerDied","Data":"49572669483d0245d824f84d5db355706620a14b91c0c2c544bc71def1164720"} Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.737450 4747 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9m4d8" event={"ID":"8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f","Type":"ContainerDied","Data":"5ea40f288600fdd040ce082f7184641e4ba0b6b257733d18e9ed0e34b9d15f1f"} Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.737474 4747 scope.go:117] "RemoveContainer" 
containerID="49572669483d0245d824f84d5db355706620a14b91c0c2c544bc71def1164720" Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.765055 4747 scope.go:117] "RemoveContainer" containerID="198e3670d1b34a4ab44ebf67f619a176997b9eea8bbae158beee6406fec62f56" Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.787354 4747 scope.go:117] "RemoveContainer" containerID="81e76935b7cdd06b6827738da1838770d2b12507667576aa9ce04daf183fe8cb" Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.812202 4747 scope.go:117] "RemoveContainer" containerID="49572669483d0245d824f84d5db355706620a14b91c0c2c544bc71def1164720" Dec 15 06:43:09 crc kubenswrapper[4747]: E1215 06:43:09.817666 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49572669483d0245d824f84d5db355706620a14b91c0c2c544bc71def1164720\": container with ID starting with 49572669483d0245d824f84d5db355706620a14b91c0c2c544bc71def1164720 not found: ID does not exist" containerID="49572669483d0245d824f84d5db355706620a14b91c0c2c544bc71def1164720" Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.817701 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49572669483d0245d824f84d5db355706620a14b91c0c2c544bc71def1164720"} err="failed to get container status \"49572669483d0245d824f84d5db355706620a14b91c0c2c544bc71def1164720\": rpc error: code = NotFound desc = could not find container \"49572669483d0245d824f84d5db355706620a14b91c0c2c544bc71def1164720\": container with ID starting with 49572669483d0245d824f84d5db355706620a14b91c0c2c544bc71def1164720 not found: ID does not exist" Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.817725 4747 scope.go:117] "RemoveContainer" containerID="198e3670d1b34a4ab44ebf67f619a176997b9eea8bbae158beee6406fec62f56" Dec 15 06:43:09 crc kubenswrapper[4747]: E1215 06:43:09.818408 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"198e3670d1b34a4ab44ebf67f619a176997b9eea8bbae158beee6406fec62f56\": container with ID starting with 198e3670d1b34a4ab44ebf67f619a176997b9eea8bbae158beee6406fec62f56 not found: ID does not exist" containerID="198e3670d1b34a4ab44ebf67f619a176997b9eea8bbae158beee6406fec62f56" Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.818450 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"198e3670d1b34a4ab44ebf67f619a176997b9eea8bbae158beee6406fec62f56"} err="failed to get container status \"198e3670d1b34a4ab44ebf67f619a176997b9eea8bbae158beee6406fec62f56\": rpc error: code = NotFound desc = could not find container \"198e3670d1b34a4ab44ebf67f619a176997b9eea8bbae158beee6406fec62f56\": container with ID starting with 198e3670d1b34a4ab44ebf67f619a176997b9eea8bbae158beee6406fec62f56 not found: ID does not exist" Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.818479 4747 scope.go:117] "RemoveContainer" containerID="81e76935b7cdd06b6827738da1838770d2b12507667576aa9ce04daf183fe8cb" Dec 15 06:43:09 crc kubenswrapper[4747]: E1215 06:43:09.818808 4747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81e76935b7cdd06b6827738da1838770d2b12507667576aa9ce04daf183fe8cb\": container with ID starting with 81e76935b7cdd06b6827738da1838770d2b12507667576aa9ce04daf183fe8cb not found: ID does not exist" containerID="81e76935b7cdd06b6827738da1838770d2b12507667576aa9ce04daf183fe8cb" Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.818837 4747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81e76935b7cdd06b6827738da1838770d2b12507667576aa9ce04daf183fe8cb"} err="failed to get container status \"81e76935b7cdd06b6827738da1838770d2b12507667576aa9ce04daf183fe8cb\": rpc error: code = NotFound desc = could not find container 
\"81e76935b7cdd06b6827738da1838770d2b12507667576aa9ce04daf183fe8cb\": container with ID starting with 81e76935b7cdd06b6827738da1838770d2b12507667576aa9ce04daf183fe8cb not found: ID does not exist" Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.874214 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f-utilities\") pod \"8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f\" (UID: \"8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f\") " Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.874371 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f-catalog-content\") pod \"8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f\" (UID: \"8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f\") " Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.874406 4747 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl77j\" (UniqueName: \"kubernetes.io/projected/8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f-kube-api-access-xl77j\") pod \"8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f\" (UID: \"8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f\") " Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.875612 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f-utilities" (OuterVolumeSpecName: "utilities") pod "8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f" (UID: "8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.880670 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f-kube-api-access-xl77j" (OuterVolumeSpecName: "kube-api-access-xl77j") pod "8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f" (UID: "8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f"). InnerVolumeSpecName "kube-api-access-xl77j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.959352 4747 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f" (UID: "8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.976763 4747 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f-utilities\") on node \"crc\" DevicePath \"\"" Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.976853 4747 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 15 06:43:09 crc kubenswrapper[4747]: I1215 06:43:09.976913 4747 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl77j\" (UniqueName: \"kubernetes.io/projected/8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f-kube-api-access-xl77j\") on node \"crc\" DevicePath \"\"" Dec 15 06:43:10 crc kubenswrapper[4747]: I1215 06:43:10.748551 4747 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9m4d8" Dec 15 06:43:10 crc kubenswrapper[4747]: I1215 06:43:10.777382 4747 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9m4d8"] Dec 15 06:43:10 crc kubenswrapper[4747]: I1215 06:43:10.782864 4747 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9m4d8"] Dec 15 06:43:12 crc kubenswrapper[4747]: I1215 06:43:12.644026 4747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f" path="/var/lib/kubelet/pods/8f7aeb6b-55c9-4a4b-82ea-8c7d9e90fc1f/volumes" Dec 15 06:43:15 crc kubenswrapper[4747]: I1215 06:43:15.630432 4747 scope.go:117] "RemoveContainer" containerID="04957b75e1647a0c1079a2918e34078c34fa86cbc8c0b1900884016cbe7c9c82" Dec 15 06:43:15 crc kubenswrapper[4747]: E1215 06:43:15.631462 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:43:27 crc kubenswrapper[4747]: I1215 06:43:27.629116 4747 scope.go:117] "RemoveContainer" containerID="04957b75e1647a0c1079a2918e34078c34fa86cbc8c0b1900884016cbe7c9c82" Dec 15 06:43:27 crc kubenswrapper[4747]: E1215 06:43:27.630166 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" 
podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:43:40 crc kubenswrapper[4747]: I1215 06:43:40.629011 4747 scope.go:117] "RemoveContainer" containerID="04957b75e1647a0c1079a2918e34078c34fa86cbc8c0b1900884016cbe7c9c82" Dec 15 06:43:40 crc kubenswrapper[4747]: E1215 06:43:40.629876 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:43:53 crc kubenswrapper[4747]: I1215 06:43:53.629969 4747 scope.go:117] "RemoveContainer" containerID="04957b75e1647a0c1079a2918e34078c34fa86cbc8c0b1900884016cbe7c9c82" Dec 15 06:43:53 crc kubenswrapper[4747]: E1215 06:43:53.630734 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:44:05 crc kubenswrapper[4747]: I1215 06:44:05.629480 4747 scope.go:117] "RemoveContainer" containerID="04957b75e1647a0c1079a2918e34078c34fa86cbc8c0b1900884016cbe7c9c82" Dec 15 06:44:05 crc kubenswrapper[4747]: E1215 06:44:05.630472 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c" Dec 15 06:44:18 crc kubenswrapper[4747]: I1215 06:44:18.629147 4747 scope.go:117] "RemoveContainer" containerID="04957b75e1647a0c1079a2918e34078c34fa86cbc8c0b1900884016cbe7c9c82" Dec 15 06:44:18 crc kubenswrapper[4747]: E1215 06:44:18.630157 4747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nldtn_openshift-machine-config-operator(1d50e5c9-7ce9-40c0-b942-01031654d27c)\"" pod="openshift-machine-config-operator/machine-config-daemon-nldtn" podUID="1d50e5c9-7ce9-40c0-b942-01031654d27c"